A Giant Map drawn with a Pen

Header

For quite some time I entertained the thought of having a wall-sized world map. Maybe it started when I read the blog post by Dominik Schwarz about his map, maybe a bit earlier. Just like Dominik, I soon realized it’s really hard to buy a map that is neither ugly nor lacking in resolution and detail.


Heezen-Tharp Bathymetry map Heezen-Tharp Bathymetry map crop japan

There are a few maps I enjoy visually, and probably at the top of the list is the bathymetry map of Heezen and Tharp, drawn by Heinrich Berann. While the sea floor depth data itself is accurate and the map is meant as scientific documentation, its status as an art object benefits a lot from the artist’s execution.

Heezen-Tharp Bathymetry map crop europe

However, the process of getting from information to image in this case involves a manual artistic step. This is especially visible when having a look at the continental shelf.

I for one would prefer something a bit more automatic; after all, I am not a professional illustrator. Like most things in life, it gets considerably more complicated the closer you look at it, and that is true for the process of turning geo-information into a map at a certain zoom level. Here is a nice blog post reflecting on how much better Google Maps handles all these intricate details compared to other map software.

I experimented a bit with different map data sources, thought about inkjet printing and wrote a script to scrape Google Maps tiles and stitch them. That works kinda okay-ish, but the classic Google Maps style did not really fit. For OpenStreetMap you can find different map styles; the most beautiful of those (at least in my opinion) is Stamen’s Toner.

Stamen Toner

But using either Google Maps or Stamen’s Toner OSM tiles would basically result in me trying to print the largest image possible. Maybe inkjet printing is not the best way to move forward. So, given our means of production, how can we get a really large, wall-sized world map? I did build some drawing machines (pen plotters) in the past, so why not do that?…

Data:

First step: how to get the data? Best data source is OpenStreetMap. No discussions. However, a dump from the OSM DB (the planet-file) is a whopping 50GB (compressed) and 1200GB uncompressed! Whew… No. Luckily, there are already pre-processed extracts for certain features and that’s all we need. Namely the land- and water-polygons, the coastline, a selection of cities including meta-data (population) and … the ocean floor depth. Luckily one doesn’t need to scrape Digital Terrain Model data by hand, but can rely on GEBCO.

At a glance:

Main problem: when working with pen plotters, we don’t deposit tiny droplets of ink based on pixel data, but draw lines with pens, based on vector data described as coordinates. So our whole image is basically just a gigantic assortment of lines. Drawing maps with pixels has one huge advantage: you can overwrite pixels. You draw the oceans, on top of that the land, on top of the land the streets, etc. When using pens and lines, every line stays visible, even the ones that should be hidden underneath other layers.

So when processing, each polygon on each layer needs to be subtracted from all underlying layers, repaired, simplified and written to an SVG file. That took me a while to figure out. Luckily there is Shapely, an incredibly well-working Python library for polygons. After creating a map, all the lines need to be sent to the pen plotter in an order that makes sense. Drawing with the pen is quite fast, but lifting the pen to move to the start of the next line is extremely slow compared to drawing. So the main objective is to optimize the pen’s path, minimizing pen-lifting movements and the travel distance in between lines. Luckily, when you start studying computer science there is usually a course like “Introduction to Algorithms and Data Structures” (which almost all freshmen hate). The problem of “walking” along a certain set of streets between cities (line start and end points) while taking the shortest route is the Chinese Postman Problem: what is the minimum set of edges that has to be added to the graph so that the shortest route traverses every edge at least once? Yeah, now do that for two million nodes…
Ok, it worked well for smaller graphs, but in the end it was very little fun to optimize my crude implementation of an algorithm solving this problem and I dropped it. The greedy approach, however, did work well: draw a line, check which remaining line’s start or end point is closest to the current position, and draw that one next. That seemed to be about 10 percent slower than the near-optimal solution, but heck, I just wanted something working well enough to get it done.
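A minimal sketch of that greedy ordering (assuming the drawing is just a list of polylines, each a list of (x, y) tuples; the names are made up for illustration):

import math

def greedy_order(polylines):
    """Order polylines so the pen travels as little as possible between them.

    From the current pen position, pick the remaining line whose start or end
    point is closest, reversing it if we enter at its end.
    """
    remaining = list(polylines)
    ordered = []
    pos = (0.0, 0.0)  # pen starts at the machine origin

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    while remaining:
        best_i, best_d, flip = 0, float("inf"), False
        for i, line in enumerate(remaining):
            if dist(pos, line[0]) < best_d:
                best_i, best_d, flip = i, dist(pos, line[0]), False
            if dist(pos, line[-1]) < best_d:
                best_i, best_d, flip = i, dist(pos, line[-1]), True
        line = remaining.pop(best_i)
        if flip:
            line = line[::-1]
        ordered.append(line)
        pos = line[-1]
    return ordered

The linear scan makes this quadratic; for two million segments a spatial index over the endpoints (e.g. a KD-tree) would be the sensible upgrade.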

Hardware:

When I started building the plotter quite a while ago, I was rather concerned about the size of the machine and how I could stow it efficiently. The plotter is made from Openbuilds V-Slot and 3D-printed connection elements. It’s a rectangle with belts in CoreXY configuration, and after removing the belts and undoing two screws per corner the plotter can be disassembled in a matter of minutes.

Squareplot Carriage Squareplot Carriage

Popular plotter designs (Axidraw) commonly use small servos to move the pen up and down. That’s inexpensive and totally suitable for something like lifting a pen, but the sounds the servo generates are extremely nasty. When using a pancake stepper and some decent drivers, the plotter is almost silent.

Squareplot Carriage

I used the TMC5160 stepper drivers from Watterott. They are expensive as hell but extremely quiet and have built-in stall detection that can be used for sensorless homing.

Drawbot board

To control the motors I use GRBL, but GRBL can’t make use of the stall detection since you need to talk to the drivers via SPI. One can either patch GRBL (uargs…) or just use a second microcontroller. The second controller talks to the drivers, checks for stall detection events and then acts like a physical endstop switch (by pulling down the first microcontroller’s endstop pins). Yay, if you’re too lazy for software, just throw another bunch of hardware at the problem…

Drawbot board

The plotter requires only a minimal amount of torque, so running the motors over CAT5 cables works well and is extremely convenient. Each wire pair drives one motor phase. Several meters of cable length are not an issue and no stepper motor loses steps.

Squareplot Connector

Plotting:

To actually get the lines on the wall I initially planned to draw directly on the wall with a slightly different pen plotter build. In the end, however, I spared my neighbours the weird moments of hearing mysterious scratchy pen sounds on the other side of the wall. I settled on plotting on cardboard sheets as tiles and used a 2x4 grid (2 by 3 meters in total) to fit the map to the wall.

Map in room

To draw text I made use of Hershey fonts, stroke-based fonts originally developed for vector displays back in the ol’ days.

Hershey

While elevation data on land is drawn as contour lines, I did not want to do the same for the bathymetry (ocean depth) data. Here I used hatching of increasing density so that deeper regions appear more blueish in color.

hatching
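Just to illustrate the idea (a sketch, not the exact code used for the map): with Shapely, the 45-degree hatch lines can be generated across a depth polygon’s bounding box at a chosen spacing and then clipped to the polygon, with deeper depth bands simply getting a smaller spacing:

from shapely.geometry import LineString

def hatch_45(polygon, spacing):
    """Clip 45-degree hatch lines to a polygon; smaller spacing = denser = darker."""
    minx, miny, maxx, maxy = polygon.bounds
    size = (maxx - minx) + (maxy - miny)
    lines = []
    # Lines of the form y = x + c intersect the bounding box
    # for c between (miny - maxx) and (maxy - minx).
    c = miny - maxx
    while c < maxy - minx:
        raw = LineString([(minx - size, minx - size + c),
                          (maxx + size, maxx + size + c)])
        clipped = raw.intersection(polygon)
        if not clipped.is_empty:
            lines.append(clipped)
        # Step c by spacing * sqrt(2) so the perpendicular distance
        # between neighbouring hatch lines equals `spacing`.
        c += spacing * 2 ** 0.5
    return lines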

That caused a few issues, since the 45-degree hatching lines are really sensitive to placement. Every mechanical system only runs up to a certain level of precision, and backlash in the drive system is one of the main reasons for this. Since I am using almost 5m of belt to control the movement of the pen, there is quite an amount of slack. Every other line is slightly too close to its neighbouring line, which results in a kind of banding effect in large monotonous hatching regions.

Plotting took about a day per tile and all in all I used up quite an amount of pens:

Empty pens

A Stabilo ballpoint pen is good for about 250m of lines (quite impressive). The lines are split into files so the plotter automatically pauses after a certain number of meters for pen replacement.
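A sketch of how such a split can look, assuming the drawing is an ordered list of polylines with coordinates in metres (the names and the 250m default are illustrative):

import math

def split_by_pen_distance(polylines, max_metres=250.0):
    """Split an ordered list of polylines into chunks, one chunk per pen."""
    chunks, current, drawn = [], [], 0.0
    for line in polylines:
        length = sum(math.hypot(x2 - x1, y2 - y1)
                     for (x1, y1), (x2, y2) in zip(line, line[1:]))
        if current and drawn + length > max_metres:
            chunks.append(current)
            current, drawn = [], 0.0
        current.append(line)
        drawn += length
    if current:
        chunks.append(current)
    return chunks  # write each chunk to its own file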

Afterwards the cardboard tiles are screwed to 3D-printed connectors mounted on the wall, which allow for a bit of alignment (in hindsight this was a very good idea).

Connection

All in all I would say I am quite happy with the results:

Map detail Map detail

Hardware and software can be found on github:

Further reading:

  • There are wonderful pen-drawn terrain maps by Michael Fogleman (see here). Highly recommended if you appreciate the visual style of this kind of map.
  • For large-scale plotting people often refer to the Polargraph. If you are new to pen plotting, you can find a lot of inspiration there.
  • If you just want to have a stab at pen plotting without worrying too much about hardware, the Axidraw is an excellent choice. An Inkscape extension allows for easy drawing and sketching. EvilMadScientist is now even selling a nice little DIY kit.
  • The twitter hashtag plottertwitter is highly recommended.
Update: occasionally people reach out to me asking if they can buy this plotter. I usually respond that I don’t build things that are ready-to-use products suitable to be sold.
I am always happy to recommend the Axidraw A3 plotter, which is the largest modern pen plotter you can buy right now and has excellent software support via Inkscape.

Neutral Density Filter on CS-Mount Fisheyes

Getting a neutral density filter onto a fisheye is a horrible mess. Some fisheye lenses solve this in a sane manner by having an integrated filter mount somewhere in the lens barrel (usually between some lens elements close to the camera, not at the front). Some people use tape, glue or magnets to fit ND filter foils to the rear element of their lens, which is about the worst possible solution to the problem. For all the other lenses, there are … curved, globe filters. Since I plan to use the lens on a fixed camera, the smoked acrylic domes used for surveillance cameras may be an option as well.

Acrylic Dome

But … maybe there is an alternative: the camera I use is a Raspberry Pi HQ Camera Module with a screw-thread-type CS-mount. Maybe there are screw-type filters for the mount itself?

CS-Mount

Yes! Indeed, there is one company which is selling exactly what I need: Midopt. The issue: really hard to get if you are not a business customer and do not have an address in the United States.

So, time for a homebrew solution:
I ordered a plain 20mm glass disc coated as an ND filter from AliExpress and 3D-printed a springy friction-fit holder. Designing the holder is a bit tricky, since it needs to be easily removable while still keeping the glass piece from moving around (which would cause abrasion and chipping).

Fusion360

The holder is printed with a 0.25mm nozzle on a Prusa Mini: Slicer

Does it work? Indeed.

Filter in mount Filter in mount


Files:

Fusion360 design file
STEP design file
STL file

I did print it with a 0.25mm nozzle at 0.15mm layer height. Material is PETG. It might work with a standard 0.4mm nozzle as well, but I haven’t tested that.

Raspberry Pi Wide Angle Lens Comparison

A few months ago the Raspberry Pi HQ camera module was released. 12MP resolution, 1/2 inch (8mm) sensor; all in all, sounds OK. The interesting thing: you can change the lenses. The mount is apparently called a CS mount and is simply a screw thread. (Modern) CS lenses, however, are mostly CCTV/surveillance camera lenses, and that’s a rather suspicious market if you want to go shopping (lots of moderately shady web shops).
There are two official lenses sold alongside the camera module, a 16mm telephoto lens and a 6mm wide-angle one. Given that the module has a 1/2 inch sensor, the crop factor relative to full-frame cameras is about 5.4, so the 6mm lens is equivalent to a focal length of 32.4mm on a regular camera… Maybe there is something that’s a bit more … actually wide-angle?

All three lenses All three lenses

Let’s compare three lenses:

Official 6mm Lens

Inexpensive and easy to purchase. The aperture can be set manually, but there are no markings (so it’s a guessing game).

Official wide angle lens Official wide angle lens example photo


Arecont Vision CS-Mount 4.0mm F1.8 Lens

Available at B&H Photo, rather pricey but small.

Arecont 4mm lens Arecont 4mm lens example photo


No-name 3.2mm F2 Lens

Available at several online shops with various brandings or directly from Aliexpress.
Very long barrel, wide field of view. Nice crisp and sharp image without a lot of chromatic abberations. Distortion is very visible.

No-name 3.2mm lens No-name 3.2mm lens example photo


Side by side:

side by side comparison (100% crop, Official Wide Angle | Arecont 4mm | No-name 3.2mm)

Digital Solargraphy

Update: a number of people got in contact with me after this blog post was written, inquiring how exactly they can get or build their own camera. The hardware and software described below is not very compatible with people who are not exactly me. I changed a lot in the meantime, and now I am quite confident that the whole thing is a bit easier to use. I set up a page describing how to get your own kit for assembling the new camera:

If you want to capture your own digital solargraphy images, have a look: Digital Solargraphy

Solargraphies (pinhole images on photographic paper that capture months of the sun arching across the horizon) became a thing sometime in the 200Xs (the first decade of the century(?), whatever…). When this caught on broadly in the early 201Xs it got a lot of people excited about film again. Quite a few people apparently started dropping cans with paper and pinholes in woods and in public urban spaces, and I very much like this idea.
Solargraphy.com (run by Tarja Trygg) is collecting hundreds of wonderful examples.

A few other relevant links:

  • Interview with Tarja Trygg: lomography.de
  • Interview with Jens Edinger about how to build (and hide) pinhole cans [in German]: lomography.de
  • Flickr Solargraphy Group: flickr
  • Motorized Solargraphy Analemmas: analemma.pl
  • People are even doing timelapses with them: petapixel.com
  • one of the very few examples (actually the only one I could find) of digital day-long sun exposures: link
  • Some Solargraphies I very much like are from Jip Lambermont: Zonnekijkster
  • Most of the analogue landscape/city images of Michael Wesely could be called Solargraphies, too.

While these pinhole cameras built from beer cans and sewer plumbing tubes have a very appealing DIY character, you can even buy them off-the-shelf by now (boo!).

No, I’m kidding. Offering pre-assembled kits makes solargraphies way more accessible and having easy-to-build hardware is certainly something this project lacks.

However, I really like film (or paper in this instance) but I got rid of all my analogue equipment. For a reason: it’s horrible to handle. So, how about doing the same but without film?

Theory

The problem:

It’s easy to create digital long exposures: reduce the sensor’s exposure to light and let it run for a few seconds. If you want to go longer, you will realize that after a few seconds it gets horribly noisy. The next step up in the game is taking many single exposures and averaging them. This way an arbitrarily long exposure can be simulated quite well in software. When using a weighted average based on the exposure value of the single images, even day-long exposures are possible. Nice! Except that won’t work for solargraphy images. While the sun burns into the film and marks it permanently, the extremely bright spot/streak of the sun is averaged away and won’t be visible in the digital ultra-long exposure. Darn…

24 hour digital long exposure:

result:

Only averaging

So, how can we solve this problem? While taking single exposures we need to keep track of the spots on the film that would be “burned” or solarized. For every image we take (with the correct exposure), we take another image right away with the least amount of light possible hitting the sensor. We assume that every bit of light that still reaches the sensor in this second, much darker exposure would have been sufficiently bright to permanently mark the film.

Let’s take a step back for a moment and talk about EV, or Exposure Value. A correctly exposed image taken at 1s with f/1.0 and ISO 100 has an EV of 0. Half a second with the same aperture and ISO is EV 1, a quarter of a second EV 2, and so on. Wikipedia lists a scene with a cloudy or overcast sky at about EV 13, and a cloud-free full-sunlight moment at EV 16. A standard (DSLR/mirrorless) camera reaches about 1/4000th of a second exposure time, most lenses stop down to f/22 and the lowest ISO setting is either 25, 50 or 100. 1/4000s @ f/22 and ISO 100 is equal to EV 20 to 22. So we can use EV to describe the amount of brightness in a scene (if we were to expose it correctly) and, at the same time, as a measure of the maximum brightness a camera can handle without overexposing. Basically: how many photons are hitting the camera and how many photons can the camera successfully block during exposure.

What EV value is needed to (reliably) determine which parts of the film would have been permanently marked? As a rule of thumb: the clearer the sky, the fewer clouds, the less haze, and the fewer particles and water droplets in the atmosphere that reflect light, the lower the maximum EV value of the camera may be. So, can a camera at 1/4000s with aperture f/22 and ISO 100 capture so few photons that we can assume a certain part of the image is extremely bright? Sometimes. Every piece of cloud that gets backlit by the sun becomes incredibly bright, and if the camera is not able to step down/reduce the brightness sufficiently, it’s impossible to reliably determine if this spot would have been bright enough to leave a mark (spoiler: it wouldn’t, but it is then impossible to differentiate between a bright cloud and an unblocked view of the sun). Stepping down to EV 20 suffices only for very clear days; if unknown conditions are to be expected (nearly always in Europe, sadly), then at least EV 24 is required in my experience.
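In code, that relation looks roughly like this (EV referenced to ISO 100, which is the standard definition rather than anything specific to this project; the function name is made up):

import math

def exposure_value(f_number, shutter_s, iso=100):
    """Scene EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

exposure_value(1.0, 1.0)      # 0.0   -> the reference point
exposure_value(22, 1 / 4000)  # ~20.9 -> roughly the "EV 20 to 22" limit from above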

However, there is an easy way to shift the window of minimum/maximum EV values the camera can capture: a neutral-density filter. That reduces the amount of light hitting the sensor considerably, so the camera won’t be able to capture images at dusk, dawn or night, but that’s not a problem in our case, since these images wouldn’t be relevant for a multi-day long exposure anyway (compared to the bright daytime, their impact on the overall image is negligible). An ND64 filter (64, or 2 to the power of 6) takes away about 6 EV (ND filters are never precise) and thus gives us 26 as the maximum EV value. How does that look?

Correctly exposed image (EV: 11) ND filter comparison

Slightly darker (EV: 14) ND filter comparison

Close to what most DSLRs achieve out of the box (EV: 19) ND filter comparison

Aaaand here we go (EV: 26) ND filter comparison

Does that suffice? I would say yes.

Software

So, how do we process this? Take a correctly exposed photo every X seconds and a second photo at EV 26 right away. From all the first photos, the long exposure image is calculated by doing a weighted average based on metadata: we can calculate the EV value from the EXIF data of each image, apply an offset to it, and use 2 to the power of the offset EV value as the weight for averaging pixel values.
For the set of second images we can’t do that, since that would average away all the burned image sections/pixels. There we just overlay every image and keep the brightest pixels across all images.
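A stripped-down sketch of those two steps (assuming the frames have already been loaded as float arrays and the EV values read from the EXIF data; the names and the offset parameter are illustrative):

import numpy as np

def stack_frames(exposures, evs, sun_frames, ev_offset=0.0):
    """exposures, sun_frames: lists of float images; evs: EV value per exposure."""
    # Weighted average of the correctly exposed frames:
    # brighter scenes (higher EV) get exponentially more weight.
    weights = [2.0 ** (ev + ev_offset) for ev in evs]
    long_exposure = np.average(np.stack(exposures), axis=0, weights=weights)

    # Brightest-pixel overlay of the heavily underexposed frames:
    # anything still visible here is treated as "burned".
    sun_overlay = np.max(np.stack(sun_frames), axis=0)

    return long_exposure, sun_overlay  # the burn/composite step comes next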

Afterwards we take the long exposure image and burn all the bright pixels with the data from our sun overlay:

Weimarhallenpark

Terrific! But how many images are required and how fast do we need to take them?
The interval duration depends on the focal length (the wider the image, the smaller the sun, and the longer the time in between images may be). In my case, for a wide-angle image (about 24mm), 60s seems to be the minimum and 45s would be preferable. If the interval exceeds 60s, the arc of the sun is reduced to overlapping circles and finally to something like a string of pearls. One way to cheat is to apply a bit of Gaussian smoothing to the sun overlay image, which helps break up the hard edges and smooth out the sun circles.

90 second interval: artifacts (gaps are caused by a partially clouded sky which blocked the sun)

The number of images needed for the long exposure depends on the amount of movement in the scene, but 60 to 90 images work well even for tiny details.

Hardware

Tada?…

Nice. We have a feasible way of creating a digital solargraphy. Except we need to actually take/make one. How do we get a (relatively) disposable camera out there that may be snatched away by pesky birds or even peskier public servants at any moment? Some solargraphy enthusiasts report a 30 to 50 percent loss of cameras when placing them out in the wild for half a year (winter to summer solstice, i.e. from the lowest to the highest point of the sun). I won’t do six months, but being prepared for losing a camera or two might be a good idea. The smallest and least expensive camera I (you?) can build is basically a Raspberry Pi Zero with a Pi Camera Module. That features a whopping 8 megapixels, but I guess that’s OK; we don’t want ultra-sharp glossy fine-art prints. Combined with some electronics for turning it on and off to take a picture pair at given intervals, a battery, a smartphone attachment lens and some horribly strong neodymium magnets, we wrap this in a 3D-printed enclosure.

hardware 1 hardware 2 hardware 3 hardware 4 hardware 5

A bit of technical detail: a Raspberry Pi hat featuring a SAMD21 microcontroller (the Arduino Zero chip) draws power from two 18650 batteries and switches the Pi on every 60s (if it’s bright outside) or at slower intervals if the camera reports less light. The Pi boots, takes a few images and powers off again. The system runs on the batteries for 2.5 days, generating about 10GB of data per day. In order to boot, measure the light, take several images, save them and power off in less than 60s, the Pi runs Buildroot, a minimal Linux distro, instead of the bloated Raspbian.

enclosure

Getting the 3D-printed box weatherproof is the hardest challenge when building this. I’ve had good results with a seal made from 3mm very soft EPDM rubber cord in a 3mm cavity.

enclosure gasket CAD enclosure gasket photo

Images

Examples from Weimar:

Theaterplatz Marktplatz Bauhaus Museum Frauenplan Platz der Demokratie August Baudert Platz Schloss Unibibliothek Bauhausuniversitaet

Caveats and flaws:

To determine burned parts/pixels I use a one-shot approach: either the exposure in a single image was sufficient to permanently leave a mark, or it wasn’t. No cumulative measure is used in any way. If there is traffic and there are cars in the image, this results in a low-fidelity reproduction of the behaviour of film. While on film the reflections from the cars’ glass and metal would build up, over a long time, into a diffuse cloud of tiny burned-in specks, the dotty noise from only a few dozen or a hundred digital exposures using the one-shot method is less appealing to the eye. A good example of how this looks on a film image is this image by Michael Wesely. But that’s something for another day.


“I want to do this too!”

Cool! However: Some assembly required. I may write a post with some more detailed info at some random time in the future. Resources for now:

  • The software I use for stacking, averaging and peaking is on github but please be advised: it is not exactly plug’n’play.

  • Eagle board files and schematics for the 2S Lipo Battery Raspberry Pi Hat can be found here

  • Fusion360 files for the watertight enclosure can be downloaded here

Got questions? Drop me a line.

Pedestrian Detection Visualization with Drones

Pedestrian movement data based on drone footage. 20min time snippets taken at 60-65m height.

The tracking pipeline looks like this:

Given 2.7K 25fps video data, that works quite well. Cyclists and pedestrians alike are detected as pedestrians, and cars create only a few false positives. All in all, it is way faster than object detection networks and totally OK for a quick visualization.
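The exact pipeline isn’t spelled out here, but a classical background-subtraction approach along these lines (OpenCV’s MOG2 subtractor plus blob filtering; the area thresholds and parameters are placeholders, not values from this project) matches the “foreground + distinct detections” overlays described in the captions below:

import cv2

def detect_pedestrians(video_path, min_area=50, max_area=800):
    """Background subtraction + blob filtering as a cheap pedestrian detector."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    detections = []  # (frame_index, x, y, w, h)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                        # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if min_area < cv2.contourArea(contour) < max_area:
                detections.append((frame_idx, *cv2.boundingRect(contour)))  # one box per blob
        frame_idx += 1
    cap.release()
    return detections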

The heatmap is based on my Python port of D3’s hexbin.

Examples from Weimar:

Theaterplatz

Theaterplatz

Raw video with detection overlays (red: foreground, colored squares: distinct detections)


August-Baudert-Platz

August-Baudert-Platz August-Baudert-Platz


Herderplatz

Herderplatz


Frauenplan

Frauenplan


Goetheplatz

Goetheplatz


Marktplatz

Marktplatz

Gphoto2 as a buildroot package

For everyone who needs to run gphoto on a buildroot system and wants to save some time:

Create a new package named gphoto2 in the package dir and add these two files:

package/gphoto2/Config.in

config BR2_PACKAGE_GPHOTO2
    bool "gphoto2"
    select BR2_PACKAGE_POPT
    select BR2_PACKAGE_LIBGPHOTO2
    help
      gPhoto2 is a free, redistributable, ready to use set of digital
      camera software applications for Unix-like systems, written by
      a whole team of dedicated volunteers around the world.
      It supports more than 2500 cameras.

      http://www.gphoto.org/

package/gphoto2/gphoto2.mk

GPHOTO2_VERSION = 2.5.23
GPHOTO2_SOURCE = gphoto2-$(GPHOTO2_VERSION).tar.bz2
GPHOTO2_SITE = https://downloads.sourceforge.net/project/gphoto/gphoto/$(GPHOTO2_VERSION)

GPHOTO2_LICENSE_FILES = COPYING
GPHOTO2_INSTALL_STAGING = YES

GPHOTO2_DEPENDENCIES = libgphoto2 popt

GPHOTO2_CONF_ENV = POPT_CFLAGS="-I$(STAGING_DIR)/usr/include" POPT_LIBS="-L$(STAGING_DIR)/usr -lpopt"

$(eval $(autotools-package))

If running Buildroot with an external tree (BR2_EXTERNAL), the package needs to be sourced from the external tree's top-level Config.in file:

source "$BR2_EXTERNAL_(EXTERNAL_TREE_NAME)_PATH/package/gphoto2/Config.in"

otherwise add it to package/Config.in.