Calibrations and Systematic Errors
In Chapter 9: Physics we took the falling mass physics example and built around it a physics simulation, a detector simulation, and an experiment analysis tool, all in one big program. Then in Chapter 10: Physics we split this into two parts: (1) an experiment simulation program, which contained the physics simulation and the detector simulation, and (2) an analysis program.

The experiment simulator wrote the data for the events (i.e. the drops) into a file that could be read by the analysis program. Data from a real experiment would go into files with the same format and be examined by the analysis program in the same way. Variations between the two types of data would then highlight problems with the simulation, the experiment, or the analysis.

Calibrations

Differences between simulated and real data could be due to instrument imperfections that produce fixed but significant deviations from the "true" values. For example, an analog-to-digital converter (ADC) converts an analog voltage to a numerical value. So a 1.0 volt input might give, say, a value of 255 for an 8-bit ADC. Perhaps, however, a 0.0 volt input doesn't produce a 0 output but a value of 5. This 5 is then an instrumental offset that should be subtracted from the ADC's output. An ADC module might have, say, a dozen ADC inputs, or channels, and each could have a slightly different offset, e.g. 3 for one, 8 for another, etc.

There might also be variations in the slope of the analog-to-digital conversion. That is, say that the conversion goes as

N = C + S * V

where V is the input voltage, C is the constant offset, S is the slope, and N is the digital output. The slope S might vary slightly from one channel to another. This slope (gain) variation would also have to be removed before the data could be analyzed.

This correction of the data for known instrument offsets and channel-to-channel variations is carried out in the calibration phase. Calibration also refers to converting the scale of the instrument output to the units of interest. For example, the 0-255 range of our 8-bit ADC would need to be converted back to the 0-1 V scale.
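
As a minimal sketch (with a hypothetical class name and made-up constants; the real values would come from the calibration runs described next), applying such a correction for one channel of our 8-bit ADC might look like:

    // Minimal sketch (hypothetical class and values) of applying an ADC
    // calibration: invert N = C + S*V to recover the input voltage.
    public class AdcCalibration {
        // Per-channel calibration constants, e.g. from a calibration run:
        // offset[ch] = C (counts at 0 V), slope[ch] = S (counts per volt).
        private final double[] offset;
        private final double[] slope;

        public AdcCalibration(double[] offset, double[] slope) {
            this.offset = offset;
            this.slope  = slope;
        }

        // Convert raw counts for the given channel to volts.
        public double toVolts(int channel, int counts) {
            return (counts - offset[channel]) / slope[channel];
        }

        public static void main(String[] args) {
            // One channel: offset 5 counts, slope 250 counts/volt
            // (the ideal 8-bit values would be 0 and 255).
            AdcCalibration cal =
                new AdcCalibration(new double[] {5.0}, new double[] {250.0});
            System.out.println(cal.toVolts(0, 130) + " volts"); // 0.5 volts
        }
    }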

Typically, you will carry out special "runs" with your instrument to determine the calibration. That is, you put in exact, known input values and then compare these with the outputs. For example, with our ADC we could put in a series of voltages stepping from 0 V up to 1 V and use the outputs to determine our offset and slope corrections.
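
For instance, a simple least-squares straight-line fit to such calibration points yields the offset C and slope S for one channel. A sketch with made-up data values:

    // Sketch: least-squares straight-line fit, N = C + S*V, to hypothetical
    // calibration data for one ADC channel.
    public class CalibrationFit {
        public static void main(String[] args) {
            // Known input voltages and the counts read back (made-up values).
            double[] volts  = {0.0,  0.25,  0.5,   0.75,  1.0};
            double[] counts = {5.0, 68.0, 130.0, 193.0, 255.0};

            int n = volts.length;
            double sumV = 0.0, sumN = 0.0, sumVV = 0.0, sumVN = 0.0;
            for (int i = 0; i < n; i++) {
                sumV  += volts[i];
                sumN  += counts[i];
                sumVV += volts[i] * volts[i];
                sumVN += volts[i] * counts[i];
            }
            // Standard least-squares formulas for slope and intercept.
            double s = (n * sumVN - sumV * sumN) / (n * sumVV - sumV * sumV);
            double c = (sumN - s * sumV) / n;
            System.out.println("offset C = " + c + ", slope S = " + s);
            // Prints roughly: offset C = 5.2, slope S = 250.0
        }
    }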

Systematic Errors

Differences between a simulation and the real data might also be due to some aspect of the experiment that varied unexpectedly, or to an incorrect assumption about the instrument. This kind of uncertainty falls under the systematic error category. It differs from random error (sometimes referred to as statistical error), which is due to the fluctuations inherent in any finite number of measurements.

In lab courses such systematic errors come up in the context of explaining the difference between accuracy and precision. A ruler, for example, might have very fine graduations that allow you to read a measurement to a fraction of a millimeter, but if you had not noticed that the lower end of the ruler had been worn down by a few millimeters, the measurements would be precise but inaccurate.

A famous case of this is the primary mirror of the Hubble Space Telescope. Its surface was ground to a curvature that was extremely precise (to 1/20 of the wavelength of light). However, because the device used to measure the curvature was itself incorrectly calibrated, it was the wrong curvature.

It might seem that a systematic error would simply be fixed once it is found, or calibrated out of the data. However, there are several situations where such solutions don't apply:

  1. The experimental data have already been taken and it is impractical or impossible to redo the experiment.

  2. There are so many different possible systematic effects that it isn't practical to remove them all or calibrate them all out of the data.

  3. The underlying physics isn't perfectly understood, and different simulations of the physics and of its interaction with the experimental apparatus lead to different results.

To overcome these problems, the simulation of the experiment allows you to estimate the systematic effects. You can vary different aspects of the experiment in the simulation and see what effect this has on the calculated results.
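
For example (a hypothetical sketch; runSimulation here stands in for whatever simulation and analysis chain produces your final result), you could shift one parameter above and below its nominal value and take half the spread in the results as that parameter's systematic error:

    // Sketch: estimate one systematic error by varying a simulation parameter.
    public class SystematicStudy {
        // Placeholder for the full simulation + analysis chain;
        // returns the final calculated result for a given detector offset.
        static double runSimulation(double detectorOffset) {
            return 2.34 + 0.8 * detectorOffset; // hypothetical dependence
        }

        public static void main(String[] args) {
            double nominal = 0.0;  // nominal parameter value
            double delta   = 0.1;  // assumed uncertainty in that value
            double up   = runSimulation(nominal + delta);
            double down = runSimulation(nominal - delta);
            // Half the spread is a common estimate of this contribution.
            double sysErr = Math.abs(up - down) / 2.0;
            System.out.println("systematic error from this parameter: " + sysErr);
        }
    }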

Case 3 above is common in high-energy particle physics, where one needs a simulation to correct for the regions around a collision point that are not covered by the detector system. These acceptance corrections would be required, for example, when calculating the total cross-section for a reaction. The assumptions about the loss of scattered particles down the beampipe might vary slightly from one simulation model to the next, and so the resulting cross-section calculations might differ slightly but significantly. Several models and simulations are used, and the spread in the final cross-section values gives a measure of this systematic error.

Typically, a systematic error is quoted separately from the random error, as in

x = 2.34 ± 0.05 ± 0.10

where the ±0.05 is the statistical error and the ±0.10 is the systematic error, which is typically the combination of several systematic effects.
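
When the individual contributions are independent, they are commonly combined in quadrature, as in this small sketch (made-up values):

    // Sketch: combine independent systematic contributions in quadrature.
    public class CombineErrors {
        public static void main(String[] args) {
            double[] sys = {0.06, 0.07, 0.03}; // hypothetical contributions
            double sumSq = 0.0;
            for (double s : sys) {
                sumSq += s * s;
            }
            System.out.println("combined systematic: " + Math.sqrt(sumSq));
            // Prints roughly 0.097, i.e. about 0.10
        }
    }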

Note that another source of systematic variation might be differences in experimental apparatus and technique. A phenomenon measured by different types of instruments, perhaps by different experimentalists in different parts of the world, might show that instruments of one type obtain different results for some unknown reason. An analyst trying to combine the results from several independent experiments might fold these instrument variations into the systematic error.

Note that if the experimental results are especially sensitive to a particular system parameter, one might want to redo the experiment and closely monitor that parameter to ensure that it remains within its acceptable range.

Similarly, if you are using a simulation to design an experiment, the study of systematics will help you decide which aspects of the system need to be controlled and monitored most closely. Conversely, you might find that even big variations in a particular parameter don't affect the result very much, so you can get by without a complex and/or expensive system to control that parameter.

Demo Simulation

In the demo programs discussed on the following pages, we illustrate the above topics by adding instrument offsets, calibration runs, and systematic errors to our falling mass experiment simulation.

Most recent update: Nov. 14, 2005

 
