Note: TrafficLand has modified their process of serving images and I have not had the time to update PyTrAn or the web interface to handle it.
Some time ago I came across a traffic camera service from TrafficLand covering the DC, Virginia and Maryland metro area. They make individual camera feeds available free of charge and provide a pay-for service featuring a multi-screen interface. The images are not the best quality; here is a live example from Key Bridge in Northwest DC (camera ID 200003):
After 30 minutes of playing with the free feed I got motivated and wrote a script to enumerate the camera list, then created my own custom multi-screen interface allowing for regex searches and customizable row/column control. This interface is available at:
$ mkdir mask_200003
$ cd mask_200003
$ ../image_collector.py 200003 30
Collecting 30 images...
30
Done.
The script is hard-coded to capture images on a 2-second delay. The delay is necessary to ensure the image has changed between captures; I believe 2 seconds to be the absolute minimum. Once complete, 30 images numbered 1 through 30 will be created in the current directory. We construct a mask from these captured images by creating a diff-image for each sequential image pair and then adding the diff-images together. Naturally, a script was written to automate this task as well:
$ ../mask_maker.py 1 30
Creating a diff for each sequential image pair.
Diffing 29
Creating the initial mask from the first image pair.
Adding the rest of the diffs to the mask.
Masking 29
Done.
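The diff-and-sum technique behind mask_maker.py can be sketched in plain Python on grayscale pixel grids. This is an illustration of the idea only — the actual script works on image files via external tools, and `build_mask` and the list-of-lists frame representation are assumptions of this sketch, not PyTrAn's real interface:

```python
def build_mask(frames):
    """Build a movement mask from a sequence of grayscale frames.

    frames: list of 2-D pixel grids (lists of rows of 0-255 ints).
    Each sequential pair is diffed; the diffs are then summed.
    Pixels that change often (vehicles on the roadway) accumulate
    brightness, while static background stays near zero.
    """
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[0] * w for _ in range(h)]
    for a, b in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                # saturate at 255, like an 8-bit image channel
                mask[y][x] = min(255, mask[y][x] + abs(a[y][x] - b[y][x]))
    return mask
```

A pixel that differs by 100 in two consecutive pairs ends up at 200 in the mask, while a pixel that never changes stays at 0 — which is exactly why the road lights up and the background stays dark.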
A number of .diff files are generated in this process. These files represent the movement between individual sequential pairs. Here is one of the diff-images from our example:
The .diff files are simply intermediary files; the important bit is the 'mask' file, which is generated as the sum of all differences:
The mask file may be dirty (as in this case) and require manual cleanup. The basic shape of the road, however, is clearly visible — evidence that, with minimal effort, we could automate the mask generation process. Also, this run was conducted at night; daytime images yield better results. Here is what our mask looks like after it's been cleaned up by hand:
There are a few final steps we need to take before we can use the example PyTrAn driver script. First we need to convert the mask to ASCII (noraw) format:
$ pnmnoraw mask > mask_200003.ascii
Then we need to open an ImageMagick 'display' window and get its X window ID using 'xwininfo'. Finally, update 'camera_id' and 'window_id' in pytran_sampling.py and launch the driver:
$ ../pytran_sampling.py
DEBUG> grabbing frame from camera 200003
DEBUG> rotating image: pytran.this > pytran.last
DEBUG> refreshing image in 3 secs
taking a 5 minute sample at various thresholds.
DEBUG> grabbing frame from camera 200003
DEBUG> generating frame diff on pytran.last, pytran.this
DEBUG> displaying image: pytran.diff
DEBUG> converting pytran.diff to ascii
DEBUG> calculating traffic ratio... ratio: 55%
DEBUG> calculating traffic ratio... ratio: 52%
...
...
5 minute sample: 67.88
5 minute sample: 42.66
5 minute sample: 30.57
5 minute sample: 23.03
5 minute sample: 18.39
5 minute sample: 14.79
5 minute sample: 12.42
5 minute sample: 10.53
5 minute sample: 9.06
5 minute sample: 7.85
The sampling script will take 5-minute samples at varying color thresholds. The optimal threshold must be chosen manually. Furthermore, you will need to sample the traffic ratios during both heavy- and light-traffic periods to get a good feel for your acceptable range. Also, keep in mind that the traffic ratio is simply the percent change detected — in other words, the movement detected within the masked region. This means that a completely empty road will register values similar to a road so congested it looks like a parking lot. The time of day can be combined with the traffic ratio to disambiguate the two cases.
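The ratio just described can be sketched as follows: count the masked (road) pixels whose frame-to-frame difference exceeds the color threshold, and report that as a percentage. The function name `traffic_ratio` and the pixel-grid inputs are illustrative assumptions, not PyTrAn's actual code:

```python
def traffic_ratio(diff, mask, threshold):
    """Percent of road pixels showing movement.

    diff: 2-D grid of per-pixel frame-to-frame differences (0-255).
    mask: 2-D grid where non-zero marks the roadway.
    A pixel counts as "moved" when its diff exceeds the threshold.
    Note: stopped traffic and an empty road both yield a low ratio,
    which is why time of day is needed to tell the cases apart.
    """
    road = changed = 0
    for d_row, m_row in zip(diff, mask):
        for d, m in zip(d_row, m_row):
            if m:
                road += 1
                if d > threshold:
                    changed += 1
    return 100.0 * changed / road if road else 0.0
```

Sweeping `threshold` over a range of values and sampling each for 5 minutes, as the driver output above shows, is how the acceptable operating range gets picked.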