Digital Solargraphy or the Art of Taking a Photo for a Day

Finally managed to do a video on digital solargraphy and explain the concept a bit more visually.


Rapid Prototyping Curved Mirrors


Sometimes one may require a non-planar mirror. Usually you make one by turning and polishing a chunk of metal on a lathe until the surface is so smooth that the metal works like a mirror. Or you can achieve a mirror surface by grinding a piece of glass or coating plastic in a vacuum chamber. All of that is pretty slow and expensive.

But is there maybe an easier or faster way at the cost of a bit of precision? (yes)

In general there are three different types of shapes:

types of shapes

The material I use is laminated and metallized polystyrene. Since there is already a mirror surface on the material, we don’t need to coat it in a second step. And since it is a thermoplastic, it is easily deformable when heated but pretty stiff at room temperature, so it keeps its shape.

Before I settled on polystyrene I did a quick test of different mirror-like materials:

  • Coated acrylic glass
  • Metallized polystyrene
  • PVC foil with an aluminium layer
  • and Rustoleum Mirror Spray on a PETG sheet

Comparing them is pretty easy: bounce light off the different mirror materials onto a sheet of paper. My reference material is a silver-coated glass mirror, which is pretty standard stuff and the highest-quality mirror you’ll find in your household.

reflection setup

The reflection of the projected test pattern is already looking pretty good.

reflection comparison

But if we subtract each image from the reference mirror’s image, only the differences remain: all the tiny imperfections and errors.

reflection comparison diff

We can see that acrylic glass looks quite OK, but has a few tears or cracks in the reflective surface. Laminated polystyrene causes a bit of color banding and has some issues, but these are well distributed across the whole surface and not as localized as with acrylic. PVC foil is just straight-up garbage and the mirror spray is even worse.
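Reproducing such a difference image is only a few lines with OpenCV; a minimal sketch, assuming aligned grayscale captures of the test pattern (file names are placeholders):

import cv2

# load the reference (glass mirror) and a candidate material capture
reference = cv2.imread("reference_glass_mirror.jpg", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("metallized_polystyrene.jpg", cv2.IMREAD_GRAYSCALE)

# absolute per-pixel difference: dark means identical, bright means error
diff = cv2.absdiff(reference, candidate)

# stretch the contrast so tiny imperfections become visible
diff = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("diff_polystyrene.png", diff)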

So, we’ve got a winner. Laminated polystyrene is something you can usually get at 0.5mm or 1mm thickness pretty much everywhere around the world, sometimes in small arts-and-crafts shops, sometimes online. One valid alternative is vinyl, which may be easier to get in some countries. If you go thinner, your mirror gets imprecise; if you go thicker, you will have a hard time deforming the material.

So, back to the mirror: you can model it in any CAD program and just pretend you are doing sheet-metal bending with a 1mm thick material. When you’ve got your desired geometry, you can just export the drawing or generate CNC tool paths from the contours (that’s what I did).

CNC milling

With a simple CNC milling operation, I carve and cut the part from the polystyrene sheet. I can spare myself a lot of frustration by using a 90-degree chamfering endmill to pre-carve the bending lines. Less hassle, more precision. If you don’t have a CNC handy, print the drawing on a sheet of paper and cut it manually with a hobby knife. Works totally okay, but is slightly less cool, of course.

CNC milling CNC milling

So, back to our mirror shapes. How can we make double-curved surfaces? First, we need to model something again and offset the surface by the thickness of the metallized plastic sheet. Then we can 3d-print the offset model as a mold for vacuum forming.

For vacuum forming you just need a few basic tools:

Thermoforming basic tools

I am using slightly undersized screw holes in the mold, so I can drill a small hole in the mirror after vacuum forming to fit a screw and permanently fix the mirror to the plastic. Glue would probably do the job as well, but the screw holes also let air escape, which makes the vacuum forming a bit easier.

3d-printed mold

Then we just need to heat up the sheet of polystyrene, press it onto the mold, turn on the vacuum and wait a few seconds until it’s hard again. Cut away the excess plastic and permanently bond the polystyrene to the mold.

Vacuum formed mirror

The resulting mirror is quite okay when it comes to precision, pretty good in terms of reflection, and extremely good concerning manufacturing time and price.

A few caveats:

Do not use PLA! PETG works okayish with a few extra perimeters, and anything that’s more heat-tolerant works even better. In any case: if your plastic sheet transfers too much heat into the printed mold, it’s game over, so do not overheat the sheet.

Stretchtest

The metallized polystyrene can handle a bit of stretch but at some point it will rip. In most cases that’s probably not an issue.

Stretchtest

Pico Projectors for Raspberry Pis

When building prototypes that require tiny projectors capable of projecting an okay-ish image over short to mid-sized distances, finding something decent is not easy. In my case I needed something that is as small as possible, has a wide field of view and is ideally compatible with the small Raspberry Pi Zero.

Texas Instruments DLP LightCrafter Display 2000 EVM for BeagleBone Black

LightCrafter Display 2000 EVM

That’s an evaluation kit for the smallest of the TI “LightCrafter” projector units, meant to be used as a BeagleBone Black hat. Luckily, with an adapter PCB this can easily be used with a Raspberry Pi as well. The Raspberry Pi’s GPIO pins can be repurposed as a parallel display interface (DPI) to get the image data to the projector, so no HDMI is required.

Pinout and I2C commands to configure the projector interface can be found on the website of Frederick van den Bosch. Another very nice build that includes an adapter board made by MickMake can be found on MickMake’s website.
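I won’t repeat the full pin mapping and timings here (they are on the linked pages), but to give an idea, the DPI setup in /boot/config.txt looks roughly like this:

dtoverlay=dpi18          # repurpose the GPIOs as an 18-bit parallel RGB interface
enable_dpi_lcd=1
display_default_lcd=1
dpi_group=2
dpi_mode=87              # custom mode, timings supplied separately
hdmi_timings=...         # 640x360@60Hz values from the linked guides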

Keep in mind: we are talking here about a DLP projector, so manual adjustment of the focus plane is necessary.

focus plane lever

This can be done with this ultra-unhandy tiny lever that moves a part of the optical assembly (there is no way to fix it in position).

Ultimems HD301-A2

Available via Chip1Stop, or rebranded as a Nebra Anybeam Developer Kit.

Ultimems HD301A2

A tiny laser projector, running at 1280x720 pixels. Full-sized HDMI input, requires 5V/1.5A via micro USB.
To save a lot of space, an HDMI-to-FFC adapter comes in handy, but may degrade the HDMI signal.

FFC HDMI

The tricky part is getting the HDMI settings right:

In /boot/config.txt the HDMI modes can be set. The Ultimems chipset supports (among others):

mode 4: 640x480@60Hz
mode 8: 800x600@56Hz
mode 14: 848x480@60Hz 
mode 85: 1280x720@60Hz

Mode 85 results in some nasty glitches with cheap adapters and long FFC cables. Here’s what worked well for me:

hdmi_force_hotplug=1
hdmi_drive=2
config_hdmi_boost=4
hdmi_group=2
hdmi_mode=14

One nice advantage of having separate modules for the projector and control stage is being able to just fold it for a close fit (removing the projector stage of the TI LightCrafter 2000 EVM is a pain since the connecting cable is quite stiff).

Ultimems folded

drawing1 drawing2

Custom Raspberry Pi Camera Cables

Sometimes one just needs a custom flat flex cable. In my case this was a Raspberry Pi Zero camera cable. A quick search told me that flex PCBs have fancy stuff like polyimide stiffeners to make certain parts more … stiff (obviously). This increases the PCB’s thickness slightly, and connectors are chosen to accommodate that. Flex PCBs are apparently really expensive. Not so much the per-unit price but the base price: PCBway charges about a hundred USD minimum. That’s slightly too expensive for my little test project.

Luckily, OSHpark is offering a flex PCB service as well, at 10 USD per square inch, exactly twice as expensive as their regular PCBs. Sadly, OSHpark flex PCBs come without stiffeners. Luckily, … I came across this handy tweet: add a copper area on the backside of the connector part and you’re good. Except that the ZIF connectors used for Pi cameras require 0.3mm thickness.

What worked well for me was adding two layers of Kapton tape (which is basically the same group of chemical compounds, polyimide) and trimming the excess with a pair of sharp scissors.

customFFC customFFC2

Not pretty but works like a charm.

A Giant Map drawn with a Pen

Header

For quite some time I entertained the thought of having a wall-sized world map. Maybe it started when I read the blog post by Dominik Schwarz about his map, maybe a bit earlier. Exactly like Dominik, I soon realized that it’s really hard to buy a map that is neither ugly nor low on resolution and detail.


Heezen-Tharp Bathymetry map Heezen-Tharp Bathymetry map crop japan

There are a few maps I enjoy visually, and probably at the top of the list is the bathymetry map of Heezen and Tharp, drawn by Heinrich Berann. While the sea floor depth data itself is accurate and this is meant to be scientific documentation, the map and its status as an art object benefit a lot from the artist’s execution.

Heezen-Tharp Bathymetry map crop europe

However, the process of getting from information to image in this case involves a manual artistic step. This is especially visible when having a look at the continental shelf.

I for one would prefer something a bit more automatic; after all, I am not a professional illustrator. Like all things in life, this is considerably more complicated the closer you look at it, and that holds for the process of turning geo-information into a map at a certain zoom level. Here is a nice blog post reflecting on how much better Google Maps handles all these intricate details compared to other map software.

I experimented a bit with different map data sources, thought about inkjet printing and wrote a script to scrape Google Maps tiles and stitch them (a minimal version of that idea is sketched below). That works kinda okay-ish, but the classic Google Maps style did not really fit. For OpenStreetMap you can find different map styles; the most beautiful of those (at least in my opinion) is Stamen’s Toner.

Stamen Toner
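For reference, the stitching idea in a few lines, assuming the usual slippy-map tile scheme (the tile server URL is a placeholder):

import math
import requests
from PIL import Image

def deg2tile(lat, lon, zoom):
    # standard slippy-map conversion from lat/lon to tile indices
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def stitch(lat1, lon1, lat2, lon2, zoom, url="https://tile.example.org/{z}/{x}/{y}.png"):
    x1, y1 = deg2tile(max(lat1, lat2), min(lon1, lon2), zoom)  # top left
    x2, y2 = deg2tile(min(lat1, lat2), max(lon1, lon2), zoom)  # bottom right
    mosaic = Image.new("RGB", ((x2 - x1 + 1) * 256, (y2 - y1 + 1) * 256))
    for x in range(x1, x2 + 1):
        for y in range(y1, y2 + 1):
            tile = requests.get(url.format(z=zoom, x=x, y=y), stream=True).raw
            mosaic.paste(Image.open(tile), ((x - x1) * 256, (y - y1) * 256))
    return mosaic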

But using either Google Maps or Stamen’s Toner OSM tiles, this would basically result in me trying to print the largest image possible. And maybe inkjet printing is not the best way to move forward anyway. So, given our means of production, how can we get a really large, wall-sized world map? I did build some drawing machines (pen plotters) in the past, so why not do that?…

Data:

First step: how to get the data? The best data source is OpenStreetMap. No discussion. However, a dump of the OSM DB (the planet file) is a whopping 50GB compressed and 1200GB uncompressed! Whew… no. Luckily, there are already pre-processed extracts for certain features, and that’s all we need: namely the land and water polygons, the coastline, a selection of cities including metadata (population) and … the ocean floor depth. Luckily one doesn’t need to scrape Digital Terrain Model data by hand, but can rely on GEBCO.

At a glance:

Main problem: when working with pen plotters, we don’t deposit tiny droplets of ink based on pixel data; we draw lines with pens, based on vector data described as coordinates. So our whole image is basically just a gigantic assortment of lines. Drawing maps with pixels has one huge advantage: you can overwrite pixels. You draw the oceans, on top of that the land, on top of the land the streets, etc. When using pens and lines, you see every line, even one drawn underneath another.

So during processing, each polygon on each layer needs to be subtracted from all underlying layers, repaired, simplified and written to an SVG file. That took me a while to figure out. Luckily there is Shapely, an incredibly well-working Python library for polygons.

After creating a map, all the lines need to be sent to the pen plotter in an order that makes sense. Drawing with the pen is quite fast, but lifting the pen to move to the start of the next line is extremely slow compared to drawing. So the main objective is to optimize the pen’s path to minimize pen-lifting movements and the travel distance between lines. Luckily, a long time ago, when starting to study computer science, there is usually a course like “Introduction to Algorithms and Data Structures” (which almost all freshmen hate). The problem of “walking” along a certain set of streets between cities (line start and end points) while taking the shortest route is the Chinese Postman Problem: what’s the minimum set of edges that needs to be added to the graph so it can be traversed in a single shortest route that covers every edge at least once? Yeah, now do that for two million nodes…
OK, it worked well for smaller graphs, but in the end it was very little fun to optimize my crude implementation of an algorithm solving this problem, and I dropped it. The greedy approach, however, did work well: draw a line, check which line’s start or end point is closest to the current position, and draw that line next. That seemed to be about 10 percent slower than the near-optimal solution, but heck, I just want to have something working well enough to get it done.
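A naive version of that greedy ordering, assuming each line is a ((x1, y1), (x2, y2)) pair of coordinates (O(n²), so fine per tile but not for the whole map at once):

import numpy as np

def greedy_order(lines):
    endpoints = np.asarray(lines, dtype=float)   # shape (n, 2, 2)
    remaining = list(range(len(lines)))
    pos = np.zeros(2)                            # pen starts at the origin
    ordered = []
    while remaining:
        # distance from the pen to both endpoints of every remaining line
        dists = np.linalg.norm(endpoints[remaining] - pos, axis=2)
        i, flip = np.unravel_index(np.argmin(dists), dists.shape)
        idx = remaining.pop(i)
        # flip == 1 means the pen reaches the line's end point first,
        # so the line is drawn in reverse
        start, end = endpoints[idx] if flip == 0 else endpoints[idx][::-1]
        ordered.append((tuple(start), tuple(end)))
        pos = end                                # pen rests at the line's end
    return ordered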

Hardware:

When I started building the plotter quite a while ago, I was rather concerned about the size of the machine and how I could stow it efficiently. The plotter is made from Openbuilds V-Slot extrusions and 3d-printed connection elements. It’s a rectangle with belts in CoreXY configuration, and after removing the belts and undoing two screws per corner, the plotter can be disassembled in a matter of minutes.

Squareplot Carriage Squareplot Carriage

Popular plotter designs (e.g. the Axidraw) commonly use small servos to move the pen up and down. That’s inexpensive and totally suitable for something like lifting a pen, but the sounds the servo generates are extremely nasty. When using a pancake stepper and some decent drivers, the plotter is almost silent.

Squareplot Carriage

I used TMC5160 stepper drivers from Watterott. They are expensive as hell, but extremely quiet and have built-in stall detection that can be used for sensorless homing.

Drawbot board

To control the motors I use GRBL, but GRBL can’t make use of any of that since you need to talk to the drivers via SPI. One can either patch GRBL (uargh…) or just use a second microcontroller. The second controller talks to the drivers, checks for stall detection events and then acts like a physical endstop switch (by pulling down the first microcontroller’s endstop pins). Yay, if you’re too lazy for software, just throw another bunch of hardware at the problem…

Drawbot board

The plotter requires only a minimal amount of torque, so running the motors over CAT5 cables works well and is extremely convenient. Each wire pair drives one motor phase. Several meters of cable length are not an issue and no stepper motor loses steps.

Squareplot Connector

Plotting:

To actually get the lines on the wall, I initially planned to draw directly on the wall with a slightly different pen plotter build. However, in the end I spared my neighbours the weird moments of hearing mysterious scratchy pen sounds on the other side of the wall. I settled on plotting on cardboard sheets as tiles and used a 2x4 grid (2 by 3 meters in total) to fit the map to the wall.

Map in room

To draw text I made use of Hershey fonts, stroke-based fonts originally developed for vector displays back in the ol’ days.

Hershey

While elevation data for land is drawn as contour lines, I did not want to do the same for the bathymetry (ocean depth) data. Here I used hatching of increasing density to display deeper regions in a more intense blue.

hatching
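The hatching itself is straightforward with Shapely (which the map pipeline uses anyway); a minimal sketch with made-up spacing values:

from shapely.geometry import LineString, Polygon

def hatch(polygon, spacing):
    # clip a field of parallel 45-degree lines (y = x + c) to the polygon;
    # the denser the spacing, the darker the region reads on paper
    minx, miny, maxx, maxy = polygon.bounds
    lines = []
    c = miny - maxx
    while c < maxy - minx:
        line = LineString([(minx, minx + c), (maxx, maxx + c)])
        clipped = line.intersection(polygon)
        if not clipped.is_empty:
            lines.append(clipped)
        c += spacing
    return lines

square = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
shallow = hatch(square, spacing=2.0)   # sparse hatching: shallow water
deep = hatch(square, spacing=0.5)      # dense hatching: deep water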

That resulted in a few issues, since the 45-degree hatching lines are really sensitive to placement. Every mechanical system only runs up to a certain level of precision, and backlash in the drive system is one of the main reasons for this. Since I am using almost 5m of belt to control the movement of the pen, there is quite an amount of slack. Every other line ends up slightly too close to its neighbouring line, which results in a kind of banding effect in large monotonous hatching regions.

Plotting took about a day per tile and all in all I used up quite an amount of pens:

Empty pens

A Stabilo ballpoint pen is good for about 250m of lines (quite impressive). The lines are split into files to automatically pause after a certain number of meters for pen replacement.
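Splitting is just a matter of accumulating drawn distance; a minimal sketch, assuming coordinates in millimeters:

import math

def split_for_pen_changes(lines, max_meters=250):
    # start a new batch (file) whenever the accumulated drawn
    # distance exceeds one pen's worth of ink
    batches, batch, drawn_mm = [], [], 0.0
    for (x1, y1), (x2, y2) in lines:
        batch.append(((x1, y1), (x2, y2)))
        drawn_mm += math.hypot(x2 - x1, y2 - y1)
        if drawn_mm / 1000.0 >= max_meters:
            batches.append(batch)
            batch, drawn_mm = [], 0.0
    if batch:
        batches.append(batch)
    return batches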

Afterwards, the cardboard tiles are screwed to 3d-printed connectors which sit on the wall and allow for a bit of alignment (in hindsight this was a very good idea).

Connection

All in all I would say I am quite happy with the results:

Map detail Map detail

Hardware and software can be found on github:

Further reading:

  • There are wonderful pen-drawn terrain maps by Michael Fogleman (see here). Highly recommended if you appreciate the visual style of this kind of map.
  • For large-scale plotting people often refer to the Polargraph. If you are new to pen plotting, you can find a lot of inspiration there.
  • If you just want to have a stab at pen plotting without worrying too much about hardware, the Axidraw is an excellent choice. An Inkscape extension allows for easy drawing and sketching. EvilMadScientist is now even selling a nice little DIY kit.
  • The Twitter hashtag #plottertwitter is highly recommended.

Neutral Density Filter on CS-Mount Fisheyes

Getting a neutral density filter on a fisheye is a horrible mess. Some fisheye lenses solve this in a sane manner by having an integrated filter mount somewhere in the lens barrel (usually between some lens elements close to the camera, not at the front). Some people use tape, glue or magnets to fit ND filter foils to the rear element of their lens. For all the other lenses, there are … curved globe filters, which are about the worst possible solution to the problem. Since I plan to use the lens on a fixed camera, smoked acrylic domes used for surveillance cameras may be an option as well.

Acrylic Dome

But … maybe there is an alternative: the camera I use is a Raspberry Pi HQ Camera Module with a screw-thread-type CS-mount. Maybe there are screw-type filters for the mount itself?

CS-Mount

Yes! Indeed, there is one company which is selling exactly what I need: Midopt. The issue: really hard to get if you are not a business customer and do not have an address in the United States.

So, time for a homebrew solution:
I ordered a plain 20mm glass piece with an ND filter coating from Aliexpress and 3d-printed a springy friction-fit holder. Designing the holder is a bit tricky, since it needs to be easily removable while keeping the glass piece from moving around (which would cause abrasion and chipping).

Fusion360

The holder is printed with a 0.25mm nozzle on a Prusa Mini:

Slicer

Does it work? Indeed.

Filter in mount Filter in mount

Raspberry Pi Wide Angle Lens Comparison

A few months ago the Raspberry Pi HQ camera module was released. 12MP resolution, 1/2 inch sensor (8mm diagonal), all in all: sounds OK. The interesting thing: you can change the lenses. The mount is apparently called a CS-mount and is simply a screw thread. (Modern) CS lenses, however, are mostly CCTV/surveillance camera lenses, and that’s a rather suspicious market if you want to go shopping (lots of moderately shady web shops).
There are two official lenses sold alongside the camera module, a 16mm telephoto lens and a 6mm wide-angle one. Given that the module has a 1/2 inch sensor, the crop factor compared to full-frame cameras is about 5.4, so the 6mm lens is equivalent to a focal length of 32.4mm on a regular camera… Maybe there is something that’s a bit more … actually wide-angle?

All three lenses All three lenses

Let’s compare three lenses:

Official 6mm Lens

Inexpensive and easy to purchase. The aperture can be set manually, but there are no markings (so it’s a guessing game).

Official wide angle lens Official wide angle lens example photo


Arecont Vision CS-Mount 4.0mm F1.8 Lens

Available at B&H-photo, rather pricey but small.

Arecont 4mm lens Arecont 4mm lens example photo


No-name 3.2mm F2 Lens

Available at several online shops with various brandings or directly from Aliexpress.
Very long barrel, wide field of view. Nice crisp and sharp image without a lot of chromatic aberration. Distortion is very visible.

No-name 3.2mm lens No-name 3.2mm lens example photo


Side by side:

side by side comparison (100% crop, Official Wide Angle | Arecont 4mm | No-name 3.2mm)

Digital Solargraphy

Update: a number of people got in contact with me after I wrote this blog post, inquiring how exactly they can get or build their own camera. The hardware and software described below are not very compatible with people who are not exactly me. I changed a lot in the meantime, and now I am quite confident that the whole thing is a bit easier to use. I set up a page describing how to get your own kit for assembling the new camera:

If you want to capture your own digital solargraphy images, have a look: Digital Solargraphy

Solargraphies (pinhole images on photographic paper that capture months of the sun arching across the horizon) became a thing sometime in the 200Xs (the first decade of the century(?), whatever…). When this caught on broadly in the early 201Xs, it got a lot of people excited about film again. Quite a few people apparently started dropping cans with paper and pinholes in woods and in public urban space, and I very much like this idea.
Solargraphy.com (run by Tarja Trygg) is collecting hundreds of wonderful examples.

A few other relevant links:

  • Interview with Tarja Trygg: lomography.de
  • Interview with Jens Edinger about how to build (and hide) pinhole cans [in German]: lomography.de
  • Flickr Solargraphy Group: flickr
  • Motorized Solargraphy Analemmas: analemma.pl
  • People are even doing timelapses with them: petapixel.com
  • one of the very few examples (actually the only one I could find) of digital day-long sun exposures: link
  • Some Solargraphies I very much like are from Jip Lambermont: Zonnekijkster
  • Most of the analogue landscape/city images of Michael Wesely could be called Solargraphies, too.

While these pinhole cameras built from beer cans and sewer plumbing tubes have a very appealing DIY character, you can even buy them off-the-shelf by now (boo!).

No, I’m kidding. Offering pre-assembled kits makes solargraphies way more accessible and having easy-to-build hardware is certainly something this project lacks.

However, I really like film (or paper in this instance), but I got rid of all my analogue equipment for a reason: it’s horrible to handle. So, how about doing the same but without film?

Theory

The problem:

It’s easy to create digital long exposures: reduce the sensor’s exposure to light and let it run for a few seconds. If you want to go longer, you will realize that after a few seconds it gets horribly noisy. The next step up in the game is taking many single exposures and averaging them. This way an arbitrarily long exposure can be simulated quite well in software. When using a weighted average based on the exposure values of the single images, even day-long exposures are possible. Nice! Except that won’t work for solargraphy images. While the sun burns into film and marks it permanently, the extremely bright spot/streak of the sun is averaged away and won’t be visible in the digital ultra-long exposure. Darn…

24 hour digital long exposure:

result:

Only averaging

So, how can we solve this problem? While taking single exposures, we need to keep track of the spots of the film that would be “burned”, or solarized. For every image we take (with the correct exposure), we take another image right away with the least amount of light possible hitting the sensor. We assume that every bit of light that hits the sensor in our second, much darker exposure would have been sufficiently bright to permanently mark the film.

Let’s take a step back for a moment and talk about EV, or Exposure Value. A correctly exposed image taken at 1s with f/1.0 and ISO 100 has an EV of 0. Half a second with the same aperture and ISO settings is EV 1, a quarter of a second EV 2, and so on. Wikipedia lists a scene with a cloudy or overcast sky at about EV 13, a cloud-free full-sunlight moment at EV 16. A standard (DSLR/mirrorless) camera reaches about 1/4000th of a second exposure time, most lenses f/22, and the lowest ISO setting is either 25, 50 or 100. 1/4000s @ f/22 and ISO 100 is equal to EV 20 to 22. So we can use EV as a way to describe the amount of brightness in a scene (if we were to expose it correctly) and, at the same time, as a measure of the maximum amount of brightness a camera can handle without overexposing. Basically: how many photons are hitting the camera, and how many photons can the camera successfully block during exposure?

What’s the EV value needed to (reliably) determine which parts of the film would have been permanently marked? As a rule of thumb: the clearer the sky, the fewer clouds, the less haze, the fewer particles and water droplets in the atmosphere that reflect light, the lower the max EV value of the camera may be. So, can a camera at 1/4000s with aperture f/22 and ISO 100 capture so few photons that we can assume that a certain part of the image is extremely bright? Sometimes. Every piece of cloud that gets backlit by the sun becomes incredibly bright, and if the camera is not able to step down/reduce the brightness sufficiently, it’s impossible to reliably determine whether this spot would have been bright enough to leave a mark (spoiler: it wouldn’t, but it’s impossible then to differentiate between a bright cloud and an unblocked view of the sun). Stepping down to EV 20 suffices only for very clear days; if unknown conditions are to be expected (nearly always in Europe, sadly), then at least EV 24 is required in my experience.
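As a sanity check, EV is easy to compute from camera settings (normalized to ISO 100):

import math

def exposure_value(aperture, shutter_s, iso=100):
    # EV = log2(N^2 / t), adjusted for ISO speeds other than 100
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

print(exposure_value(1.0, 1.0))     # 0.0   -> the EV 0 reference
print(exposure_value(22, 1/4000))   # ~20.9 -> a DSLR stepped all the way down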

However, there is an easy way to shift the window of min/max EV values the camera can capture: a neutral-density filter. It reduces the amount of light that hits the sensor considerably, so the camera won’t be able to capture images at dusk, dawn or night, but that’s not a problem in our case, since these images wouldn’t be relevant for a multi-day long exposure anyway (compared to the bright daytime, their impact on the overall image is negligible). An ND64 filter (64, or 2 to the power of 6) takes away about 6 EV (ND filters are never precise) and thus gives us 26 as the max EV value. How does that look?

Correctly exposed image (EV: 11) ND filter comparison

Slightly darker (EV: 14) ND filter comparison

Close to what most DSLRs achieve out of the box (EV: 19) ND filter comparison

Aaaand here we go (EV: 26) ND filter comparison

Does that suffice? I would say yes.

Software

So, how to process this? Take a correctly exposed photo every X seconds and a second photo at EV 26 right away, too. From all the first photos, the long exposure image is calculated by doing a weighted average based on metadata: we can calculate the EV value from the EXIF data of each image, apply an offset to the value, and use 2 to the power of the offset EV value as our weight for averaging pixel values.
For the set of second images we can’t do that, since we would average out all burned image sections/pixels. There we just overlay every image and keep the brightest pixels of all images.
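The actual implementation is on github (linked at the end), but the core of it boils down to something like this numpy sketch — grayscale float images assumed, EXIF parsing omitted, and the burn threshold made up:

import numpy as np

def solargraph(exposures, evs, sun_frames, burn_threshold=16):
    # 1. weighted long exposure: weight each frame by 2^EV, so bright
    #    daytime frames dominate the average
    weights = np.array([2.0 ** ev for ev in evs])
    stack = np.tensordot(weights, np.stack(exposures), axes=1) / weights.sum()

    # 2. sun overlay: per pixel, keep the brightest value any of the
    #    heavily underexposed frames ever recorded
    overlay = np.maximum.reduce([np.asarray(f, dtype=float) for f in sun_frames])

    # 3. burn-in: wherever the dark frames caught light at all,
    #    the "film" is permanently marked
    return np.where(overlay > burn_threshold, 255.0, stack)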

Afterwards we take the long exposure image and burn all the bright pixels with the data from our sun overlay:

Weimarhallenpark

Terrific! But how many images are required and how fast do we need to take them?
Interval duration depends on the focal length (the wider the image, the smaller the sun, the longer the time between images may last). In my case, for a wide-angle image (about 24mm), 60s seems to be the minimum and 45s would be preferable. If the interval exceeds 60s, the arc of the sun is reduced to overlapping circles and finally just something like a string of pearls. One way to cheat is by applying a bit of Gaussian smoothing to the sun overlay image to help break up the hard edges and smooth out the sun circles.
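In code, that cheat is a one-liner (reusing the overlay array from the sketch above; sigma is something to tune by eye):

from scipy.ndimage import gaussian_filter

# soften the sun overlay so separate sun discs blend into a smooth streak
overlay = gaussian_filter(overlay, sigma=2.0)  # sigma in pixels, tune to taste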

90 second interval: artifacts (gaps are caused by a partially clouded sky which blocked the sun)

The number of images required for the long exposure depends on the amount of movement in the scene, but 60 to 90 images work well even for tiny details.

Hardware

Tada?…

Nice. We’ve got a feasible way of creating a digital solargraphy. Except we need to actually take/make one. How to get a (relatively) disposable camera out there that may be snatched away by pesky birds or even peskier public servants at any moment? Some solargraphy enthusiasts report 30 to 50 percent loss of cameras when placing them out in the wild for half a year (winter to summer solstice, i.e. lowest to highest point of the sun). I won’t do six months, but being prepared for losing a camera or two might be a good idea. The smallest and least expensive camera I (you?) can build is basically a Raspberry Pi Zero with a Pi Camera Module. That features a whopping 8 megapixels, but I guess that’s OK; we don’t want ultra-sharp glossy fine-art prints. Combined with some electronics for turning it on and off to take a picture pair at given intervals, a battery, a smartphone attachment lens and some horribly strong neodymium magnets, we wrap this in a 3d-printed enclosure.

hardware 1 hardware 2 hardware 3 hardware 4 hardware 5

A bit of technical detail: a Raspberry Pi hat featuring a SAMD21 microcontroller (the Arduino Zero chip) draws power from two 18650 batteries and switches the Pi on every 60s (if it’s bright outside) or at slower intervals if the camera reports less light. The Pi boots, takes a few images and powers off again. The system runs on the batteries for 2.5 days, generating about 10GB of data per day. In order to boot the system, measure the light, take several images, save them and power off in less than 60s, the Pi runs buildroot, a minimal Linux distro, instead of the bloated Raspbian.
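For orientation, the capture pair per wakeup looks roughly like this with the picamera library — a sketch only, the real camera does more bookkeeping and the values are illustrative:

import time
from picamera import PiCamera

with PiCamera(resolution=(3280, 2464)) as camera:
    time.sleep(2)                      # let auto exposure settle
    camera.capture("exposure.jpg")     # 1: correctly exposed frame

    # 2: block as much light as possible for the "sun" frame
    camera.iso = 100
    camera.shutter_speed = 100         # in microseconds
    camera.exposure_mode = "off"       # lock the manual settings
    time.sleep(0.5)
    camera.capture("sun.jpg")          # only the brightest spots survive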

enclosure

Getting the 3d-printed box weatherproof is the hardest challenge when building this. I’ve had good results with a seal made from 3mm very soft EPDM rubber cord in a 3mm cavity.

enclosure gasket CAD enclosure gasket photo

Images

Examples from Weimar:

Theaterplatz Marktplatz Bauhaus Museum Frauenplan Platz der Demokratie August Baudert Platz Schloss Unibibliothek Bauhausuniversitaet

Caveats and flaws:

To determine burned parts/pixels I use a one-shot approach: either the exposure in a single image sufficed to permanently leave a mark, or it didn’t. No cumulative measure is used in any way. If there is traffic and cars in the image, this results in a low-fidelity reproduction of the behaviour of film exposures. While on film, reflections from the cars’ glass and metal would build up into a diffuse cloud of tiny burned-in specks over a long period of time, the spotty noise of only a few dozen or a hundred digital exposures using the one-shot method is less appealing to the eye. A good example of how this looks on a film image is this image by Michael Wesely. But: that’s something for another day.


“I want to do this too!”

Cool! However: Some assembly required. I may write a post with some more detailed info at some random time in the future. Resources for now:

  • The software I use for stacking, averaging and peaking is on github but please be advised: it is not exactly plug’n’play.

  • Eagle board files and schematics for the 2S Lipo Battery Raspberry Pi Hat can be found here

  • Fusion360 files for the watertight enclosure can be downloaded here

Got questions? Drop me a line.