Perlin noise for 3d-printed parts

Recently I spent a bit of time thinking about visually improving the non-functional areas of a 3d-printed part: some generated pattern that could be imprinted on parts of the object without interfering with the geometry required for functionality, while still being (somewhat) printable.
Disclaimer: I started this inquiry with very little knowledge about 3d stuff (point clouds, meshes and surface reconstruction algorithms) and there may be way better solutions if you’ve got a basic understanding of these topics.

What I ended up with is Perlin noise. That’s a pretty simple way of generating continuous noise patterns on a plane, in 3d space or in any other dimension. In the two-dimensional case you get a pretty nice landscape-like output with hills and valleys (but no caves, no overhangs). That’s one of the many use cases of Perlin noise: generating landscapes in games.

perlin noise example

Alternatives to classic or improved Perlin noise are apparently Value noise and Simplex noise, but I just went with the classic flavour. The hard part is understanding the algorithm, since there are a lot of explanations of varying quality covering different algorithms (new and classic). Picking and combining explanations from the posts by Adrian Biagioli and Raouf somehow did the trick.

I refactored a bit of code from StackOverflow (as one does) with a slightly different set of gradients. (Python code is available here)

Once you’ve got the algorithm running you get a set of Z values for an XY coordinate grid. How do we make anything 3d-printable from this data? The problem is that STL files are polygon meshes with vertices, edges and faces, but all we’ve got at this point are raw coordinates. Basically a point cloud. Let’s look at that first:

The most convenient software for visualizing point clouds I could find is MeshLab. I did write the XYZ coordinates of my perlin noise computation to a file, one coordinate tuple per line. MeshLab can open that via File > Import Mesh. meshlab screenshot, points only
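
For reference, the export boils down to something like this (a sketch, not the linked script: it uses the noise package's pnoise2 instead of the hand-rolled implementation, and grid size, scale and Z exaggeration are made-up values):

import noise  # pip install noise

SIZE, SCALE = 200, 40.0  # grid resolution and noise "zoom"
with open("perlin_points.xyz", "w") as f:
    for ix in range(SIZE):
        for iy in range(SIZE):
            z = noise.pnoise2(ix / SCALE, iy / SCALE, octaves=4)
            f.write(f"{ix} {iy} {z * 10:.3f}\n")  # exaggerate Z for visible relief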

The nice thing about MeshLab is that it comes with a set of common algorithms for point cloud/mesh problems.

Apparently the correct term for getting from a point cloud to a mesh is “Surface Reconstruction” and the most straightforward way of doing this is a Screened Poisson algorithm. One requirement for that is to have the normals for all points and MeshLab can compute that easily by selecting Filters > Normals, Curvatures and Orientations > Compute normals for point sets.

Now one can just run Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Screened Poisson and hit Apply.
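
If you prefer scripting over clicking, the same two steps can be done with pymeshlab. This is a sketch assuming a recent pymeshlab release; the filter names changed between versions, so check pymeshlab.print_filter_list() if these don't match yours.

import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("perlin_points.xyz")
# Filters > Normals, Curvatures and Orientations > Compute normals for point sets
ms.compute_normal_for_point_clouds()
# Filters > Remeshing, Simplification and Reconstruction > Screened Poisson
ms.generate_surface_reconstruction_screened_poisson()
ms.save_current_mesh("perlin_surface.stl")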

meshlab screenshot, mesh

That looks already pretty good! Apparently the algorithm creates a bit of padding at the edges of the point cloud, but that’s not a show stopper. The problem is that our mesh is not actually a body but just a surface.

Maybe there is a totally convenient way of just extruding this and remeshing or something similar, but I did not find an easy one. What I did instead is change my Perlin noise script to also create point coordinates for “walls” on all four sides and a bottom.
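
The gist of it, as a rough sketch (not my actual script; step size and bottom depth are arbitrary): for every point on the outer rim of the heightfield, drop a column of points down to a flat bottom, and add the bottom plane itself.

import numpy as np

def close_heightfield(rim_points, xs, ys, z_bottom=-3.0, step=0.5):
    """rim_points: (x, y, z) tuples on the boundary of the noise grid;
    xs, ys: the grid coordinates. Returns the extra wall/bottom points."""
    extra = []
    for x, y, z_top in rim_points:
        zs = np.arange(z_bottom, z_top, step)  # a column of points below the rim
        extra.append(np.column_stack([np.full(zs.size, x),
                                      np.full(zs.size, y), zs]))
    xx, yy = np.meshgrid(xs, ys)  # the bottom plane
    extra.append(np.column_stack([xx.ravel(), yy.ravel(),
                                  np.full(xx.size, z_bottom)]))
    return np.vstack(extra)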

meshlab screenshot, complete mesh

Same steps as before and then hit File > Export Mesh As and select STL. And now we’ve got an STL file that we could just print.

Prusa Slicer screenshot

But how can we use this STL file to modify another STL?

What I did was create another body in my CAD software which encompasses all the non-functional parts of the component. Every bit of space that this body occupies may be kept or removed, depending on the Perlin noise output.

CAD model comparison

I exported this as an STL as well and combined these meshes with the simplest tool available: boolean operations in OpenScad.

OpenScad screenshot

union() {
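    // keep the original part, minus the volume reserved for the pattern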
    difference(){
        import("original_part.stl");
        import("allowed.stl"); 
    }
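    // add back only the perlin body inside that reserved volume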
    intersection(){
        import("perlin.stl");
        import("allowed.stl"); 
    }
}

The preview looks pretty awful because OpenScad (or CGAL) is not able to deal well with meshes that have overlapping points/faces. The output is not perfect, but can be repaired with a mesh repair tool or a slicer.

Loading the resulting STL in the slicer looks like this:

Prusa Slicer screenshot

To make the Perlin noise pattern actually printable upside down, I cut off all noise values >= 0 (so only the valleys remain, not the hills).
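
That boils down to a clamp on the height value; a minimal sketch, assuming z is the raw noise value:

z = min(z, 0.0)  # keep the valleys, flatten everything at or above the base plane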

So, what does the print look like?

Single Lens Pi Camera image

Raspberry Pi Power Via USB

Sometimes it’s not possible, or really tedious, to get a USB cable to the USB power connector on a Raspberry Pi. Since the 5V pins, the USB power connector, and the USB hub share the same power rail, it doesn’t matter where the electrons enter and exit. The only difference is that the Pi has a few capacitors, a resettable fuse, and a diode directly behind the USB power input. When powering the Pi via the 5V pins on the 40-pin header, this protection and the capacitors that deal with sudden power draws won’t do anything. The same applies when back-powering via the USB hub.

To make back-powering the Pi via the USB hub a bit more convenient, I made these backpower adapters that contain the same resettable fuse, capacitors and diode as the Raspberry Pi design.

adapter1 adapter2

You can find the schematics and EasyEDA design files here.

The ever-extending list of really weird cameras

I have a soft spot in my heart for really weird contraptions to take pictures. A non-exhaustive list of at least slightly unusual cameras which may get updated from time to time…

(in no particular order)


The SPUD - a self contained scanner camera

spud1 spud2 spud3


The Alulu Camera - The Receipt Paper Film Camera

alulu1 alulu2 alulu3


The Brancopan - A 3d-printed panoramic camera that was crowdfunded to make the plans available to everyone
(made by the pretty cool Cameradactyl people)

brancopan1


The GameBoy Camera (of course) - the smallest and cheapest digital camera of its time

gameboycamera1 CC BY NC - Jess C on flickr

gameboycamera2 CC BY NC ND - Mario Durán on flickr


The Light L16 - A camera with 16 sensors (and 16 lenses)

L16


The Etch-A-Snap - A camera that draws its output on an Etch-A-Sketch

Digital Solargraphy or the Art of Taking a Photo for a Day

Finally managed to do a video on digital solargraphy and explain the concept a bit more visually.

gif / webm / mp4

gif / webm / mp4

gif / webm / mp4

gif / webm / mp4

gif / webm / mp4

Rapid Prototyping Curved Mirrors


Sometimes one may require a non-planar mirror. Usually you can do that by turning and polishing a chunk of metal on a lathe until it is so smooth that the metal works like a mirror. Or you can achieve a mirror surface by grinding a piece of glass or coating plastic in a vacuum chamber. All of that is pretty slow and expensive.

But is there maybe an easier or faster way at the cost of a bit of precision? (yes)

In general there are three different types of shapes:

types of shapes

The material I use is laminated and metallized polystyrene. Since there is already a mirror surface on the material, we don’t need to coat it in a second step. And since it is a thermoplastic, it is easily deformable when heated and pretty stiff at room temperature, so it keeps its shape.

Before I settled on Polystyrene I did a quick test of different mirror-like materials:

  • Coated acrylic glass
  • Metallized polystyrene
  • PVC foil with an aluminium layer
  • and Rustoleum Mirror Spray on a PETG sheet

Comparing these is pretty easy: bounce light off the different mirror materials onto a sheet of paper. My reference material is a silver-coated glass mirror, which is pretty standard stuff and the highest quality mirror you’ll find in your household.

reflection setup

The reflection of the projected test pattern is already looking pretty good.

reflection comparison

But if we subtract each image from the reference mirror’s image, only the differences remain: all the tiny imperfections and errors.
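
The comparison itself is a simple per-pixel subtraction; a minimal sketch with OpenCV (file names are made up):

import cv2

ref = cv2.imread("reference_mirror.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("test_material.png", cv2.IMREAD_GRAYSCALE)
diff = cv2.absdiff(ref, test)  # bright pixels = deviation from the reference
cv2.imwrite("diff.png", diff)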

reflection comparison diff

We can see that acrylic glass looks quite ok, but has a few tears or cracks in the reflective surface. Laminated polystyrene causes a bit of color banding and has some issues, but these are well distributed over the whole surface and not as localized as with acrylic. PVC foil is just straight-up garbage, and the mirror spray is even worse.

So, we’ve got a winner. Laminated polystyrene is something you can usually get at half-millimeter or 1-millimeter thickness pretty much everywhere in the world, sometimes in small arts and crafts shops, sometimes online. One valid alternative is vinyl, which may be easier to get in some countries. If you go thinner, your mirror gets imprecise; if you go thicker, you will have a hard time deforming the material.

So, back to the mirror: you can model that in any CAD program and just pretend you are doing sheet metal bending with 1mm thick material. When you’ve got your desired geometry, you can just export the drawing or generate CNC tool paths from the contours (that’s what I did).

CNC milling

With a simple CNC milling operation, I carve and cut the part from the polystyrene sheet. I can spare myself a lot of frustration by using a 90-degree chamfering endmill to pre-carve the bending lines. Less hassle, more precision. If you don’t have a CNC handy, print the drawing on a sheet of paper and cut it manually with a hobby knife. Works totally okay, but is slightly less cool, of course.

CNC milling CNC milling

So, back to our mirror shapes. How can we make double-curved surfaces? First, we need to model something again and offset the surface by the thickness of the metallized plastic sheet. Then we can 3d-print the offset model as a mold for vacuum forming.

For vacuum forming you just need a few basic tools:

Thermoforming basic tools

I am using slightly undersized screw holes in the mold, so I can drill a small hole in the mirror after vacuum forming, fit a screw and permanently fix the mirror to the printed mold. Glue would probably do the job as well, but the screw holes also make it easier for air to escape, which makes the vacuum forming a bit easier.

3d-printed mold

Then we just need to heat up the sheet of polystyrene, press it on the mold, turn on the vacuum and wait a few seconds till it’s hard again. Cut away the excess plastic and permanently bond the polystyrene to the mold.

Vacuum formed mirror

The resulting mirror is quite okay when it comes to precision, pretty good in terms of reflection, and extremely good concerning manufacturing time and price.

A few caveats:

Do not use PLA! PETG works okayish with a few extra perimeters, and anything more heat-tolerant works even better. In any case: if your plastic sheet transfers too much heat into the printed mold, it’s game over, so do not overheat the sheet.

Stretchtest

The metallized polystyrene can handle a bit of stretch but at some point it will rip. In most cases that’s probably not an issue.

Stretchtest

Pico Projectors for Raspberry Pis

When building prototypes that require tiny projectors capable of projecting an okay-ish image over short to mid-sized distances, finding something decent is not easy. In my case I needed something that is as small as possible, has a wide field of view and is ideally compatible with a Raspberry Pi Zero.

Texas Instruments DLP LightCrafter Display 2000 EVM for BeagleBone Black

LightCrafter Display 2000 EVM

That’s an evaluation kit for the smallest of the TI “LightCrafter” projector units, meant to be used as a BeagleBone Black cape. Luckily, with an adapter PCB it can easily be used with a Raspberry Pi as well. The Raspberry Pi’s GPIO pins can be repurposed as a parallel display interface (DPI) to get the image data to the projector, so no HDMI is required.

The pinout and the I2C commands to configure the projector interface can be found on the website of Frederick van den Bosch. Another very nice build, including an adapter board made by MickMake, can be found on MickMake’s website.

Keep in mind: we are talking here about a DLP projector, so manual adjustment of the focus plane is necessary.

focus plane lever

This can be done with this ultra-unhandy tiny lever that moves a part of the optical assembly (there is no way to fix it in position).

Ultimems HD301-A2

Available via Chip1Stop, or rebranded as a Nebra Anybeam Developer Kit.

Ultimems HD301A2

A tiny laser projector running at 1280x720 pixels. Full-sized HDMI input, requires 5V/1.5A via micro USB.
To save a lot of space, an HDMI-to-FFC adapter comes in handy, but may degrade the HDMI signal.

FFC HDMI

The tricky part is getting the HDMI settings right:

In /boot/config.txt the HDMI mode can be set. The Ultimems chipset supports (among others):

mode 4: 640x480@60Hz
mode 8: 800x600@56Hz
mode 14: 848x480@60Hz 
mode 85: 1280x720@60Hz

Mode 85 resulted in some nasty glitches with cheap adapters and long FFC cables. This is what worked well for me:

hdmi_force_hotplug=1
hdmi_drive=2
config_hdmi_boost=4
hdmi_group=2
hdmi_mode=14

One nice advantage of having separate modules for the projector and the control stage is being able to just fold the assembly for a close fit (separating the projector stage of the TI LightCrafter 2000 EVM is a pain since the connecting cable is quite stiff).

Ultimems folded

drawing1 drawing2

Custom Raspberry Pi Camera Cables

Sometimes one just needs a custom flat flex cable. In my case this was a Raspberry Pi Zero camera cable. A quick search told me that flex PCBs have fancy stuff like polyimide stiffeners to make certain parts more … stiff (obviously). This increases the PCB’s thickness slightly, so connectors are chosen to accommodate that. Flex PCBs are apparently really expensive. Not so much the per-unit price but the base price. PCBWay charges about a hundred USD minimum. That’s slightly too expensive for my little test project.

Luckily OSHpark is offering a flex PCB service as well at 10 USD per square inch, exactly twice as expensive as their regular PCBs. Sadly, OSHpark flex PCBs come without stiffeners. Luckily, … I came across this handy tweet. Add a copper area on the backside of the connector part and you’re good. Except that the ZIF connectors used for Pi cameras require 0.3mm thickness.

What worked well for me was adding two layers of Kapton tape (which belongs to basically the same family of chemical compounds: polyimide) and trimming the excess with a pair of sharp scissors.

customFFC customFFC2

Not pretty but works like a charm.

A Giant Map drawn with a Pen

Header

For quite some time I entertained the thought of having a wall-sized world map. Maybe it started when I read the blog post by Dominik Schwarz about his map, maybe a bit earlier. Exactly like Dominik, I soon realized it’s really hard to buy a map that is neither ugly nor lacking in resolution and detail.


Heezen-Tharp Bathymetry map Heezen-Tharp Bathymetry map crop japan

There are a few maps I enjoy visually, and probably at the top of the list is the bathymetry map of Heezen and Tharp, drawn by Heinrich Berann. While the sea floor depth data itself is accurate and the map is meant as scientific documentation, its status as an art object benefits a lot from the artist’s execution.

Heezen-Tharp Bathymetry map crop europe

However, getting from information to image involved, in this case, a manual artistic process. This is especially visible when looking at the continental shelf.

I, for one, would prefer something a bit more automatic; after all, I am not a professional illustrator. Like most things in life, this gets considerably more complicated the closer you look at it, and that is certainly true for turning geo-data into a map at a given zoom level. Here is a nice blog post reflecting on how much better Google Maps handles these intricate details compared to other map software.

I experimented a bit with different map data sources, thought about inkjet printing and wrote a script to scrape Google Maps tiles and stitch them. That works kinda okay-ish, but the classic Google Maps style did not really fit. For OpenStreetMap you can find different map styles; the most beautiful of those (at least in my opinion) is Stamen’s Toner.

Stamen Toner

But with either Google Maps or Stamen’s Toner OSM tiles, this would basically result in me trying to print the largest image possible. And maybe inkjet printing is not the best way forward anyway. So, given my means of production, how can I get a really large, wall-sized world map? I did build some drawing machines (pen plotters) in the past, so why not do that?…

Data:

First step: how to get the data? The best data source is OpenStreetMap. No discussion. However, a dump of the OSM DB (the planet file) is a whopping 50GB compressed and 1200GB uncompressed! Whew… No. Luckily, there are already pre-processed extracts for certain features, and that’s all we need: the land and water polygons, the coastline, a selection of cities including metadata (population) and … the ocean floor depth. Luckily one doesn’t need to scrape digital terrain model data by hand, but can rely on GEBCO.
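
Loading those extracts is pleasantly boring; a rough sketch with geopandas (the file names are placeholders for the downloads from osmdata.openstreetmap.de; GEBCO’s depth grid is a raster and handled separately):

import geopandas as gpd

land = gpd.read_file("land-polygons-split-4326/land_polygons.shp")
coast = gpd.read_file("coastlines-split-4326/lines.shp")

# reproject to the target map projection and drop detail no pen can draw anyway
land = land.to_crs(epsg=3857)
land["geometry"] = land.simplify(tolerance=100)  # tolerance in meters, pick per scale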

At a glance:

Main problem: when working with pen plotters, we don’t deposit tiny droplets of ink based on pixel data, but draw lines with pens, based on vector data described as coordinates. So our whole image is basically just a gigantic assortment of lines. Drawing maps with pixels has one huge advantage: you can overwrite pixels. You draw the oceans, on top of that the land, on top of the land the streets, etc. When using pens and lines, you see every line, even one drawn underneath another.

So, during processing, each polygon on each layer needs to be subtracted from all underlying layers, repaired, simplified and written to an SVG file. That took me a while to figure out. Luckily there is Shapely, a Python library for polygon operations that works incredibly well.

After creating the map, all the lines need to be sent to the pen plotter in an order that makes sense. Drawing with the pen is quite fast, but lifting the pen and moving to the start of the next line is extremely slow in comparison. So the main objective is to optimize the pen’s path: minimize pen-lifting movements and the travel distance between lines. Luckily, a long time ago, when you start studying computer science, there is usually a course like “Introduction to Algorithms and Data Structures” (which almost all freshmen hate). The problem of “walking” along a certain set of streets between cities (line start and end points) while taking the shortest route is the Chinese Postman Problem: what is the minimum set of edges that has to be added to the graph so that it can be traversed on the shortest route while covering every edge at least once? Yeah, now do that for two million nodes…
OK, it worked well for smaller graphs, but in the end it was very little fun to optimize my crude implementation of an algorithm solving this problem, and I dropped it. The greedy approach, however, did work well: draw a line, check which remaining line’s start or end point is closest to the current position, and draw that one next (see the sketch below). That seemed to be about 10 percent slower than the near-optimal solution, but heck, I just wanted something working well enough to get it done.
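
Roughly, the greedy ordering looks like this (a simplified sketch, not the exact code from the repo; lines is a list of polylines, each a list of (x, y) tuples):

import math

def greedy_order(lines):
    remaining = list(lines)
    ordered, pos = [], (0.0, 0.0)  # pen starts at the machine origin
    while remaining:
        # find the line whose start or end point is closest to the pen
        best_i, best_rev, best_d = 0, False, float("inf")
        for i, line in enumerate(remaining):
            for rev, p in ((False, line[0]), (True, line[-1])):
                d = math.dist(pos, p)
                if d < best_d:
                    best_i, best_rev, best_d = i, rev, d
        line = remaining.pop(best_i)
        if best_rev:
            line = line[::-1]  # draw it back to front
        ordered.append(line)
        pos = line[-1]
    return ordered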

Hardware:

When I started building the plotter quite a while ago, I was rather concerned about the size of the machine and how I could stow it efficiently. The plotter is made from OpenBuilds V-Slot extrusions and 3D-printed connection elements. It’s a rectangle with belts in a CoreXY configuration, and after removing the belts and undoing two screws per corner, the plotter can be disassembled in a matter of minutes.

Squareplot Carriage Squareplot Carriage

Popular plotter designs (AxiDraw) commonly use small servos to move the pen up and down. That’s inexpensive and totally suitable for something like lifting a pen, but the sounds the servo generates are extremely nasty. With a pancake stepper and some decent drivers, the plotter is almost silent.

Squareplot Carriage

I did use the TMC5160 stepper drivers from Watterott. They are expensive as hell but extremely quiet and have built-in stall detection that can be used for sensorless homing.

Drawbot board

To control the motors I use GRBL, but GRBL can’t make use of the stall detection since you need to talk to the drivers via SPI. One can either patch GRBL (uargs…) or just use a second microcontroller. The second controller talks to the drivers, checks for stall detection events and then acts like a physical endstop switch (by pulling down the first microcontroller’s endstop pins). Yay, if you’re too lazy for software, just throw another bunch of hardware at the problem…

Drawbot board

The plotter requires only a minimal amount of torque, so running the motors over CAT5 cables works well and is extremely convenient. Each wire pair drives one motor phase. Several meters of cable length are not an issue and no stepper motor loses steps.

Squareplot Connector

Plotting:

To actually get the lines onto the wall I initially planned to draw directly on the wall with a slightly different pen-plotter build. However, in the end I spared my neighbours the weird moments of hearing the mysterious sounds of scratchy pens on the other side of the wall. I settled on plotting on cardboard sheets as tiles and used a 2x4 grid of them (2 by 3 meters in total) to fit the map to the wall.

Map in room

To draw text I made use of Hershey fonts, stroke-based fonts originally developed for vector displays back in the ol’ days.

Hershey

While elevation data on land is drawn as contour lines, I did not want to do the same for the bathymetry (ocean depth) data. Here I used hatching of increasing density, so deeper regions appear more blueish in color.

hatching
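
Generating the hatching itself is straightforward with Shapely; a sketch (the spacing is chosen per depth bucket, smaller for deeper water):

from shapely.geometry import LineString, Polygon

def hatch_45(poly: Polygon, spacing: float):
    """45-degree hatch lines clipped to a polygon; returns LineStrings
    (or MultiLineStrings for concave polygons)."""
    minx, miny, maxx, maxy = poly.bounds
    height = maxy - miny
    lines = []
    c = minx - height  # sweep the whole bounding box
    while c < maxx:
        candidate = LineString([(c, miny), (c + height, maxy)])
        clipped = candidate.intersection(poly)
        if not clipped.is_empty:
            lines.append(clipped)
        c += spacing
    return lines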

That resulted in a few issues, since the 45 degree hatching lines are really sensitive to placement. Every mechanical system only achieves a certain level of precision, and backlash in the drive system is one of the main reasons for this. Since I am using almost 5 m long belts to move the pen, there is quite an amount of slack. Every other line ends up slightly too close to its neighbouring line, which results in a kind of banding effect in large monotonous hatching regions.

Plotting took about a day per tile and all in all I used up quite an amount of pens:

Empty pens

A Stabilo ballpoint pen is good for about 250m of lines (quite impressive). The lines are split into files to automatically pause after a certain number of meters for pen replacement.

Afterwards the cardboard tiles are screwed to 3d-printed connectors which are sitting on the wall and allow for a bit of alignment (in hindsight this was a very good idea).

Connection

All in all I would say I am quite happy with the results:

Map detail Map detail

Hardware and software can be found on github:

Further reading:

  • There are wonderful pen-drawn terrain maps by Michael Fogleman (see here). Highly recommended if you appreciate the visual style of this kind of map.
  • For large-scale plotting people often refer to the Polargraph. If you are new to pen plotting, you can find a lot of inspiration there.
  • If you just want to have a stab at pen plotting without worrying too much about hardware, the AxiDraw is an excellent choice. An Inkscape extension allows for easy drawing and sketching. Evil Mad Scientist is now even selling a nice little DIY kit.
  • The Twitter hashtag #plottertwitter is highly recommended.