Making Flexible Lenses at Home

Have you ever felt the need to make flexible and squishy lenses from silicone? Do you only have a 3d printer and a very basic CNC mill at home? Then you may be the target audience. What I’m writing about here only works for concave molds, which make convex-convex or plano-convex lenses, and I admit the whole thing is certainly something of a niche topic, but I mean, that’s what the internet is for. Niche topics.

Making a lens or a lens mold on a CNC mill might not always be the best idea, but sometimes it may make sense. Especially if you are working with acrylic or aluminium as a material and your expectations are somewhat limited. Good enough is often all you need.

But, a quick disclaimer: I’m neither a machinist nor do I have a clue about optics, I am just someone with a problem, trying to solve it somehow. I’ll explain what worked well for me; if you know more about this stuff, your feedback is much appreciated.

So, back to the problem: for a project I want to make some lenses from soft silicone. For that I need the inverted shape of the lens as a mold, so I can pour the liquid silicone into it, and once it hardens I’ve got my soft lens.

Model of the mold in Fusion360

The problem with any optical surface that should work as a lens is that you need both a perfect overall shape and a perfect surface quality. If your shape is distorted, your focal length may be off or you get some horrible imaging artifacts. If your surface is rough and has tiny scratches, you will get fogging or lose contrast.

You can buy high-quality lenses covering most diameters and focal lengths from Thorlabs or Edmund Optics for a bit of money. You just buy the negative version of what you actually need and put it into your mold. That’s pretty nice because you get your perfect shape and optically clear surface as part of the mold for an okay-ish amount of money. The problem is: it’s glass. Borosilicate glass. And that’s about three-quarters silicon dioxide. Silicone (with an E) basically just sticks to silicone and … silicon. So itself and glass. Instead of glass lenses you can buy plastic lenses as well, and they will be considerably less expensive, but I couldn’t really find a supplier with a decent catalog that actually sells to customers directly and not just business-to-business.

I did a few tests coating a glass surface with mold release for silicone molding, to see if I could make it work anyway, but the release agent (either sprayed wax or liquid wax) just messes up the optical surface (the lens on top has been waxed, the bottom one is clean).

waxed surface

The molding and separation work fine, but the surface gets a texture so it’s not acting as a lens anymore.

So, that’s almost it for the introduction, but before I start: there are a few YouTube videos on this topic that I can highly recommend:

video overview

All of them contain a ton of helpful info, but none of them solved my problem completely, and that’s the reason why this video and post exist.

Mold Materials

First off: mold materials. I have made some molds from acrylic and from aluminium. Acrylic has a perfect optical surface on all areas that you are not machining and is a lot easier to work with. Aluminium is slightly more complicated to machine, but it’s a bit easier to polish because it’s harder.

When buying acrylic there are two varieties: GS and XT. GS is cast by pouring liquid acrylic between two extremely flat glass plates. XT is extruded through a die and then pressed between rollers. GS is slightly more expensive and is only sold as plates and blocks, while XT is available in a variety of shapes. But GS has less internal stress in the material, so for milling it’s the reasonable choice. If you want to work with aluminium: buy an alloy that’s hard enough for milling. I used an alloy without silicon, but I would guess that trace amounts in the alloy might not really be a problem for silicone molding.


When milling – no matter the toolpath strategy – we will create some kind of steps in the material. The simple way to reduce that is to use a ball endmill.

steps and cusps

The larger the radius, the smoother the transition between the steps. And every bit of smoothness we can achieve during milling saves us a lot of polishing effort later on.
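You can actually estimate that effect: the cusp height between adjacent passes follows from the ball radius and the stepover. A quick sketch (the 0.2mm stepover is just an example value, not my actual setting):

```python
import math

def scallop_height(stepover_mm, ball_radius_mm):
    """Height of the ridge left between two adjacent passes of a ball endmill."""
    r, s = ball_radius_mm, stepover_mm
    # exact geometric form; for small stepovers this is roughly s^2 / (8 r)
    return r - math.sqrt(r * r - (s / 2) ** 2)

# a 6mm ball (3mm radius) at a 0.2mm stepover leaves cusps of roughly 1.7 microns
print(round(scallop_height(0.2, 3.0) * 1000, 2))
```

Doubling the stepover roughly quadruples the cusp height, which is why a small stepover pays off so much before polishing.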

endmill comparison

I tested two carbide endmills, a 2-flute 6mm and a 2-flute 8mm, as well as a single-flute 6mm endmill with a polycrystalline diamond tip. All of these endmills were brand new, so the flutes should have been nice and sharp.

endmill comparison workpiece

For comparison, I am looking at the acrylic surfaces right off the CNC, and surprisingly the 3mm standard endmill looks best. Maybe that’s an issue with my speeds and feeds, so take this with a grain of salt.


Anyway, let’s talk about toolpaths before moving on. I am using Fusion 360 for CAD and CAM, and Fusion basically gives us three different options:

Fusion360 parallel toolpath

A parallel toolpath, where the tool is just going back and forth in lines, varying the depth of cut.

Fusion360 scallop toolpath

A scalloping toolpath, going in circles, lifting the tool after each circle.

Fusion360 spiral toolpath

A spiral toolpath, lifting the tool continuously while spiraling.

But when looking at a few test pieces I couldn’t see much of a difference, so I am sticking to spiral.

An additional note: I am using a home-built CNC mill here. That’s certainly not a perfect machine. It has a non-negligible amount of angular error in all axes, there is a tiny bit of miscalibration in the steps per mm, and it has some backlash. Basically, it’s what you would expect from a CNC machine at home. This does affect what makes sense and what doesn’t: the error introduced by the machine may be considerably larger than the difference between endmills or toolpath strategies. The result is pretty clear either way: both the general shape and the surface quality directly off the CNC are not perfect. For example, I am not quite sure who’s to blame for this circular pattern:

finish from the CNC mill

Maybe it’s Fusion360’s toolpath settings, maybe it’s Fusion’s Gcode generator, or maybe it’s my CNC controller. I still need to remove a bit of material to even out the spherical surface and get rid of the scratches afterwards.


With a proper CNC or a lathe you can just go directly to polishing and the important part is only that the polishing tool follows the existing surface as closely as possible. But when the geometry is not perfect yet you need something you can use as a tool and it needs to have exactly the curvature you want in the end, just inverted or mirrored. The most precise spherical surface I could find for a reasonable price was: precision ball bearings.

steel balls intended for ball bearings

That’s hardened steel or ceramic, there’s quite a decent selection of different sizes, and you can buy even the larger ones in small quantities pretty easily. Perfect.

Next step: to actually grind and polish the material some abrasives are needed.

silicon carbide powder

Usually people use silicon carbide and cerium oxide for glass, and maybe something cheaper like aluminium oxide for acrylic; that’s what’s used in many acrylic polishing creams. But it’s pretty hard to get small quantities of these abrasive powders. Shopping around on Aliexpress or Amazon, you can find lapping pastes with diamond particles. These are pretty expensive per gram compared to powders and probably overkill, but I won’t need much, so that’s what I used.

Diamond lapping paste

Now we just need to press the bearing ball against the mold and move it for a few hours. Of course, I have very little interest in doing that manually so let’s modify a machine for that. You can use whatever you got, there is very little force involved, so a cheap 3d printer with a few additional printed parts totally does the job. I am using the CNC because that was the easiest option. I just remove the spindle and screw a fourth stepper motor to the bed which can spin a piece of plastic with the ball.

CNC with rotating table addon

Instead of the spindle, I put a holder on the Z column that can slide up and down. The holder presses the mold onto the bearing ball and has a ball joint so the mold can tilt. I am using a clamp to prevent the mold from spinning and a small weight on top of the holder, but be careful: too much pressure prevents the parts from grinding properly.

CNC with rotating table addon

If it’s working as expected you can actually hear the grinding.

I wrote a small script to generate gcode that moves the X and Y axes in a circular pattern to change the center of rotation.

polishing pattern script
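The generator itself is nothing fancy; a minimal sketch could look like this (radius, feed rate, and step counts are example values, not the ones from my actual script):

```python
import math

ORBIT_RADIUS_MM = 2.0   # how far the center of rotation wanders
FEED = 20               # mm/min, slow compared to the spinning ball
POINTS = 72             # line segments approximating one circle
ORBITS = 30

lines = ["G21", "G90"]  # millimeters, absolute positioning
for _ in range(ORBITS):
    for i in range(POINTS):
        a = 2 * math.pi * i / POINTS
        x = ORBIT_RADIUS_MM * math.cos(a)
        y = ORBIT_RADIUS_MM * math.sin(a)
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{FEED}")
print("\n".join(lines))
```

Piping the output into a file gives you a G-code program you can run like any other job.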

If the table did not move the ball sideways, it would look something like this:

grinding issues

The ball bearing spins and moves the lapping paste at high speed across the circumference, while the center does not move at all.

grinding issues

When I move the ball bearing while spinning I can reduce that difference a bit. That’s how it looks in action:

gif / webm / mp4

Of course, this whole setup will still not result in a perfectly even polishing action across the whole surface, but I guess it’s good enough.

gif / webm / mp4

The first and largest grain size, 40 microns, takes forever because the shape of the cavity is not perfectly spherical and the bearing ball needs to grind it down a lot before it makes even contact. But once that’s done, about an hour per grain size is more than enough.

Every once in a while, I remove the mold, clean it, and put it in front of a camera and under a microscope. Quick note on one thing that’s pretty annoying about acrylic: you shouldn’t use isopropanol for cleaning, it creates micro-cracks in the surface. Soap and water work well. Giving it a look under the microscope is a bit tricky because microscope objectives have a very shallow focal plane and the surface we want to inspect has a pretty decent curvature. One solution is simply to put the microscope lens on a motorized macro slider and do some focus stacking.

DIY microscope

More info can be found here.

microscope view

By the way: we will be looking top-down at this section of the part. In the microscope image we can see the rough, light-reflecting surface getting finer and finer.

gif / webm / mp4

At 7 microns (image below) the acrylic is beginning to clear up, both on the camera and under the microscope. When all the scratches from the previous grain size are gone, we can move on to the next one.

7 micron

When we go smaller, getting rid of all the particles of the previous size is not so easy. I lost a lot of time and had to go back several grain sizes because I got big, fat scratches on my nice surface:

Surface polished to 2.5 microns but scratched

In the end, I just 3d-printed several copies of the part which holds the bearing ball and use a fresh one when switching to smaller particles. Adding magnets to prevent the ball from moving helps to keep contamination to a minimum as well. Sometimes the lapping paste needs a bit of thinning so the oil film with the diamond particles is as thin as possible. I used WD-40 for that, but probably any other mineral oil would work as well.

2.5 micron

Eventually, at 2.5 microns, I stopped. The surface is not perfect; you can still see some fogging and a few larger scratches if you look closely. There is probably a limit to what you can achieve here anyway. Usually, when polishing or lapping, the tool should be softer than the object you want to polish. For example, when polishing lenses, people use pitch, which is technically not even a solid material: it slowly deforms and exactly mimics the shape of the surface. In our case it’s the opposite, both aluminium and acrylic are softer than hardened steel, and that may be an issue at this small scale. I am fairly sure that the uneven material removal rate will have widened this surface slightly around the rim. I have neither the tools nor the knowledge to measure the deviation from a sphere precisely, but it’s obviously light-years ahead of my hand-polishing attempts. I’m gonna use the lens molded off this thing as a spherical singlet, so imaging errors off-center would be clearly visible even if the lens were perfectly manufactured.


Last step: make it squishy! I’m not gonna talk too much about pouring silicone, there are people who explained that way better than I could. Just the basics: we mix both components of the silicone and put that in a cheap vacuum chamber to remove the air bubbles, sadly that step is not optional. Once it’s degassed, we pour it and wait a few hours before removing it from the mold. That’s all. But we actually need a silicone that’s optically clear and reasonably soft and that’s more of an issue.

related work using XP-565

A lot of research papers that did something with soft optical sensors use a platinum-cure silicone called XP-565 by the company Silicones, Inc., but that was impossible for me to purchase in Europe. Some other researchers use Smooth-On’s Solaris; that’s a potting silicone made for sealing solar cells and electronics, so it’s reasonably clear. Sadly, that stuff was just absurdly expensive. The general problem is that many silicones you find online are described as translucent, transparent, clear, or optically clear, but only in the last case can you be sure about what you get. And even then… I tested one that looks pretty good at first glance, SILGLAS 25, but it is marketed as a special-effects silicone because it is extremely brittle and breaks like ice.

lens made from SILGLAS25

Sadly, that’s not what I need. After a few more candidates I settled on Trollfactory’s Type 19. Clarity is sufficient, price is okay-ish and it’s mechanically robust. The only problem: it’s very viscous and a bit fast-acting.

Very viscous type19 silicone

You get about five minutes for mixing, degassing, and pouring once you add the catalyst to the base component. Anyway, if you are in Europe, this may be your best choice.

But the problem with highly viscous silicone is getting rid of the trapped air. That’s true both for the bubbles created by mixing the components and for the air trapped in the mold when pouring. I decided that I don’t want to use any thinner for the silicone and instead fill the mold inside the vacuum I need for degassing anyway. That’s not perfect, since the vacuum pump I am using only achieves about 90% vacuum or so, but it helps. For that, I built this simple contraption: a cup clipped to a servo motor that perfectly fits inside my vacuum chamber. It’s driven by a Raspberry Pi running on a powerbank, so I don’t need to put any holes in the vacuum chamber for cables.

vacuum pour machine

Bonus: there is a camera to actually watch the pouring progress in the acrylic mold. I am using an Arducam Hawkeye camera for the Raspberry Pi that actually supports autofocus. Pretty convenient for testing. So I just measure and mix the silicone and put it in the chamber. The vacuum pump takes about a minute to empty the chamber, and the bubbles start rising to the surface. Once enough air is gone, I tell the motor to move the cup and pour the silicone into the mold. There are still a few bubbles, but doing two or three cycles of pressurizing and evacuating again gets rid of those as well.
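The servo side of the pour is equally simple. A sketch of how the cup angle could be ramped (the gpiozero usage is commented out because the pin number and timing are assumptions about the hardware, not my exact setup):

```python
def pour_angles(start=-1.0, end=1.0, seconds=10.0, rate_hz=20):
    """Servo positions for a slow, even tilt of the cup (gpiozero uses -1..1)."""
    steps = int(seconds * rate_hz)
    return [start + (end - start) * i / steps for i in range(steps + 1)]

# On the Pi, roughly (GPIO pin 17 is an assumption):
#   from gpiozero import Servo
#   import time
#   servo = Servo(17)
#   for a in pour_angles():
#       servo.value = a
#       time.sleep(1 / 20)
```

A slow, steady tilt keeps the stream thin, which helps avoid folding new air into the degassed silicone.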

during pouring

Once that’s done, it’s waiting for a few hours, and then removing the lens from the mold. I am using a 3d-printed part as a spacer for this mold because only top and bottom need to be really precise and I don’t care about the optical quality of the side of my lens. All parts of the mold are aligned with dowel pins and fixed with screws. That was the simplest design I could come up with.


So, that’s my flexible lens:

uncompressed flexible lens compressed flexible lens

You can see a slight whitish tint; that’s the cheap silicone. The lenses look pretty clear, and the liquid silicone will of course be a bit forgiving and not perfectly replicate the tiniest scratches in the optical surface. Everything smaller than a micron or so will probably be gone, maybe a bit more given the viscosity of this particular silicone. That’s part of the reason for my “good enough” approach. Another reason is that the resulting lens will be flexible, so the dimensional accuracy requirements are kind of low anyway.

lens concept drawing

In case you are wondering what this lens is supposed to do: the first surface of the lens (the one on the left) has a focal length that is equal to its width. Any light reflected by objects touching the other surface will exit the lens collimated. Basically, that means we’ve got a squishy magnifying glass that works at any distance, as long as your camera or your eye is focused at infinity.
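For a rough feel for the numbers: in the paraxial approximation, parallel rays entering a single curved surface focus inside the material at n·R/(n−1). Both values below are my assumptions, not measured properties of this lens (~1.41 is typical for clear silicones, and 30mm is the radius of the large bearing ball):

```python
n = 1.41   # refractive index, typical for clear silicone (assumption)
R = 30.0   # mm, radius of curvature of the 60mm bearing ball (assumption)

# Parallel rays entering the curved surface focus inside the material at
# n * R / (n - 1) from that surface; roughly the lens width needed so that
# light from the flat back face exits collimated.
width = n * R / (n - 1)
print(round(width, 1))
```

So with these example values, the lens body would need to be on the order of 100mm thick; the real design obviously depends on the actual radius and material.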

Taking it from here:

Given this technique, we can make optical-quality (to an extent) molds for silicone. But if you made the part from acrylic, you can also use it directly as a lens with a negative focal length. If you made it from aluminium, you can use it as a tool to grind and polish another CNC-milled part with a positive focal length. Errors may add up and grow in the process, but you get the gist.

Usually I publish CAD files and code for my projects because someone might find them useful. In this case I am not doing that, because the attachments to my CNC mill or the makeshift vacuum pour contraption will only work with the exact stuff I use. But sometimes (especially when global supply chains go haywire) it’s hard to source the parts one needs, and that’s why I am listing this:

Where did I get the materials?

I am located in Germany, depending on where you live the following section may be helpful or not work at all for you. Some of these links will slowly become obsolete over the next weeks, months, and years. These are not affiliate links, I do not earn any money if you buy something.

| Part | Details | Price |
| --- | --- | --- |
| Vacuum chamber + pump | Generic vacuum pump VP115 | 155.75€ |
| Trollfactory Type 19 | Clear silicone, Shore A19 | 35.90€ |
| Acrylic glass | Acrylic plate GS, 10mm | 7.50€ |
| Endmill Ball | Carbide, 2 flutes, 6mm shaft, 3mm radius | 4.30€ |
| Endmill Diamond | Polycrystalline diamond, 1 flute, 6mm shaft, 3mm radius | 31.66€ |
| Diamond lapping paste | 40-0.5 micron, 12 grain sizes (can be bought directly from Aliexpress as well) | 12.00€ |
| Ball bearings | 15mm / 60mm, hardened steel (1.3505), precision G40 | |
| Magnets | Ferrite pot magnets, CSF-25 (for the large ball) | 7.75€ |
| Magnets | Neodymium ring magnet, 10/7 mm (for the small ball) | 4.40€ |

Microscope Focus Stacking for Part Inspection (on a Budget)

Recently I found myself on a yak-shaving detour, in dire need to look at some very small things. In my particular case: to assess scratches after consecutive grinding and polishing steps of a lens mold. Basically, I polished a cavity and needed to know whether the scratches of the prior grain size had been removed by the current one, so I could move on to the next. For that, I needed some cheap microscope objectives with a bit of magnification (4x to 10x). The problem: the focal plane of these objectives is very, very shallow. Since I want to look top-down at the curved cavity meant for molding lenses, I need to cover quite a bit of distance.

acrylic glass that should be inspected during polishing

The simple solution to this problem is focus stacking. Just take several images while moving the microscope lens just a fraction of a millimeter and combine only the sharp areas of those images afterwards.

There are a few noteworthy microscopy projects out there that work towards more accessible and open-source imaging:

The OpenFlexure project developed a 3-axis movement stage with micrometer precision, based on deforming a cheap 3d-printed part. They have some options for microscope objectives based on cheap Raspberry Pi cameras, and that stuff seems to work well for biology samples, i.e. looking at tiny cells on a small glass slide with 50-100x magnification.

UC2 (or you see, too), is a wide collection of building blocks (like, literally blocks) for modular microscopes. Documentation looks pretty extensive and they seem to have a lot of options for rather fancy imaging techniques.

However, neither project really had a simple solution for my particular problem (large parts, low magnification), so I set out to build my own very simple microscope rig.

the microscope

For that a bit of electronics, hardware, and software is necessary:

Electronics & Sensor

The first version of the DIY microscope used a Sony A6000 with a 3d-printed spacer for the microscope lens. This worked fine, and the large APS-C sensor covered quite a large field of view with the microscope objective. I controlled the camera using gphoto2 from a Raspberry Pi. But moving the camera, taking a picture, and loading it from the internal storage to the Pi is slow and tedious. In addition, I didn’t want to torture the mechanical shutter of the camera excessively. The easier solution is using a Raspberry Pi HQ camera module and 3d-printing a spacer for the objective. The sensor is considerably smaller, so the field of view is narrower, but it’s faster, is controlled directly by the Pi, and doesn’t have a mechanical shutter.

The motor responsible for moving the microscope objective closer to the camera is controlled by a Fysetc E4 running FluidNC. That is basically a grbl-compatible firmware for ESP32 boards instead of the good old ATmega328. The big advantage of FluidNC on the Fysetc E4 board is that you can just load a config file with steps-per-mm and motor-current settings, and FluidNC takes care of configuring the very silent TMC2209 stepper drivers on the board accordingly. It’s a very versatile combination that requires minimal effort.

I am not using a gcode sender but my own camera-slider Python script, which works well for this simple job. The general procedure: use picamera to take an image, tell FluidNC to move the Z axis by a few steps, take the next image, and so on…
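The core loop of that procedure boils down to something like this (a sketch, not my actual script: the camera and serial handles are injected, FluidNC simply accepts plain G-code over its serial port, and step size, feed, and settle delay are placeholder values):

```python
import time

def capture_stack(cnc, camera, frames=40, step_mm=0.02, settle_s=1.0):
    """Alternate between taking a picture and nudging the Z axis upward."""
    cnc.write(b"G91\n")                        # relative positioning
    for i in range(frames):
        camera.capture(f"stack_{i:03d}.jpg")   # picamera-style capture call
        cnc.write(f"G1 Z{step_mm} F50\n".encode())
        time.sleep(settle_s)                   # crude wait for the move to finish
```

A real version would wait for FluidNC’s "ok" response instead of sleeping, but for a slow focus stack the fixed delay is good enough.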


the macrorail I use as a base

As a linear actuator, I am using components I originally intended for a macro rail (which kind of makes sense). A 270mm 40x20 aluminium extrusion with a 250mm MGN12H linear rail from Aliexpress. A 1.8-degree stepper motor with a GT2 belt spins an 8mm ACME leadscrew. The pulleys for the GT2 belt have a 3:1 gear ratio, and the leadscrew has a 2mm pitch. This gives us a theoretical resolution of 3.33 microns per step.
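The arithmetic behind that figure, for anyone double-checking:

```python
STEPS_PER_REV = 200   # 1.8-degree stepper, full steps per revolution
GEAR_RATIO = 3        # GT2 pulleys, 3:1 reduction to the leadscrew
PITCH_MM = 2.0        # leadscrew pitch

# one motor step advances the carriage by pitch / (steps * ratio)
microns_per_step = PITCH_MM / (STEPS_PER_REV * GEAR_RATIO) * 1000
print(round(microns_per_step, 2))  # → 3.33
```

With microstepping enabled the theoretical figure shrinks further, though mechanics and backlash dominate long before that matters.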

The 3d-printed tube holding the microscope objective, camera module, and Raspberry Pi is clamped with an Arca-Swiss plate to the linear-rail carriage. Both Arca plate and clamp are from Mengs Photo, my preferred purveyor of low-price photo parts.

a micronstage with retrofitted stepper motors

The X and Y axes are controlled by a micrometer XY stage from Aliexpress. I got the idea and purchase information from UC2; you can find them in the UC2-MicronStage repository. However, I mostly just move the axes manually to set the position once, without making use of the stepper motors.

The microscope objectives I use are just some cheap generic ones from Amazon for about 20 Euro each (4x, 10x). They come with an RMS thread as their only mounting option, so they need to be screwed either into an RMS adapter or into a machined holder with an RMS thread cut into it (Thorlabs sells the correct tap). I just printed a sufficiently similar thread with the 3d printer and used a bit of force, so the metal thread on the objective cuts its way into the plastic. Not the most elegant solution, but it works well.


RGB ringlight mounted around the microscope objective with an additional diffusor

I tried to be clever and use an RGB ring light mounted around the microscope objective. By selectively mixing colors and controlling the direction of light, you can highlight the direction of scratches.

gif / webm / mp4

The problem is that the geometry of the ring light (position and diameter) only works well in a rather narrow set of situations, and even then it is slightly dim.

RGB ringlight mounted around the microscope objective with an additional diffusor

While the camera sensor could just expose a bit longer to compensate for the dim LEDs, that’s a pain with the Raspberry Pi camera driver, so I tried to avoid it. The simplest solution was just to use a very bright lamp (1000 lumen) and flood the whole area with light at an angle.

IKEA lamp as flood illumination


I tested a few pieces of software in the process of finding something that works for me:

  • Helicon Focus is apparently the first choice for many macro photographers. It’s fast and the results are okay; however, the cheapest lifetime license was 119 Euro at the time of writing, and that’s a bit too expensive for what I intend to use it for.
  • Zerene Stacker has a 30-day trial and I was pleasantly surprised. Very easy to use, reliable, and has keyboard shortcuts to speed up the workflow. However, it’s a bit expensive for my taste as well.
  • ImageJ is the well-established (Java) software for microscopy image analysis. However, it is horrible to use. Fiji (Fiji Is Just ImageJ) is an installer and add-on bundle that should make using ImageJ more convenient and it does alleviate the pain a bit, but only a bit. I tried focus stacking using the Extended Depth of Field plugin by the Biomedical Imaging Group. It looks pretty fancy at first glance and there are a lot of features like 3d depth maps, but I couldn’t find settings for the built-in algorithms that produced stacked sequences of the same quality as Helicon Focus or Zerene Stacker.
  • hugin and enfuse, two open-source tools made for HDR and panorama image processing, did work very well in the end. I am using align_image_stack from hugin for aligning image stacks (no surprise here) and enfuse to combine the in-focus regions. The results are not as robust when the input data is bad, but I can easily use a shell script for batch processing and save a lot of work processing dozens of image stacks.

Excerpts from my shell script:

Align images:

$APPDIR/align_image_stack -v -m -a aligned $1/*.jpg

Combine aligned images:

$APPDIR/enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --hard-mask --contrast-window-size=9 --output=$OUTPUT_PATH/$2$DIR_NAME.jpg $TMP/*.tif

On my Mac, APPDIR=/Applications/Hugin/tools_mac when hugin is installed via the official installer.

That’s certainly not the most convenient workflow for people who would rather use a graphical user interface, but for me it’s perfect. When I do a dozen polishing steps and want a focus-stacked image between each one, I don’t want to manually copy images, click five buttons in a user interface, and assign a name to the resulting image. I can write a script for that once and not worry about it later. The only truly manual step: while Zerene Stacker and Helicon Focus seem to manage this fine, enfuse is a bit susceptible to images in the stack that are completely out of focus. If you have a bit of overshoot in your image stack, i.e. you move the camera beyond the object and no part of the image is in focus, you may need to delete those frames manually. If you don’t, aligning and stacking might fail.
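If I ever automate that deletion step, a crude sharpness score would probably do; variance of a Laplacian is the standard trick. A pure-Python sketch (not part of my actual workflow — the threshold is a guess, and a real version would use OpenCV on the actual image files):

```python
def laplacian_variance(gray):
    """Sharpness score for a 2D list of grayscale pixel values."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor Laplacian: strong response at edges and fine detail
            vals.append(gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                        + gray[y][x + 1] - 4 * gray[y][x])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def has_focus(gray, threshold=50.0):
    """Reject frames whose sharpest detail is below the (guessed) threshold."""
    return laplacian_variance(gray) >= threshold
```

Frames failing the check would be skipped before handing the stack to align_image_stack and enfuse.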


Aligning and stacking a sequence of 24 images:

gif / webm / mp4

The result looks quite usable:

stacked image

A Digital Toy Camera


It’s silly. It’s slightly impractical. It’s a toy camera. More about that here.

Lighthouse Lamp

Just as an afterthought to the Geodesic Light: The Lighthouse Lamp

Lighthouse Lamp

Recently I had a bit of fun playing around with an ultra-big nozzle for 3d printing. The nozzle in question is a Bondtech CHT nozzle with a 1.2mm opening. This allows printing clear PETG with these very visible ultra-thick lines:

Colorfabb PETG clear on Prusa Mini Colorfabb PETG clear on Prusa Mini

What I am printing there is a circular Fresnel lens as a lamp shade. The lamp shade works as a diffusor and a lens at the same time.

Fusion360 cross section


Running at full intensity.

Ikea Tradfri bulb

The bulb is an Ikea Tradfri “smart” bulb. Being able to dim the light output is a nice extra.

CNC-cut wooden base

Printed top

Geodesic Light

I felt like making a stupid lamp, and this is what it looks like. More about it here: /thing/geodesiclight

Perlin noise for 3d-printed parts

Recently I spent a bit of time thinking about visually improving non-functional areas of a 3d-printed part: some generated pattern that could be imprinted on parts of the object without creating issues for the geometries required for functionality, while still being (somewhat) printable.
Disclaimer: I started this inquiry with very little knowledge about 3d stuff (point clouds, meshes, and surface reconstruction algorithms), and there may be way better solutions if you’ve got a basic understanding of these topics.

What I ended up with is Perlin noise. That’s a pretty simple way of generating continuous noise patterns on a plane, in a 3d space, or in any other dimension. In the two-dimensional case you get a pretty nice landscape-like output with hills and valleys (but no caves, no overhangs). That’s one of the many use cases of Perlin noise: generating landscapes in games.

perlin noise example

Alternatives to classic or improved Perlin noise are apparently Value noise and Simplex noise, but I just went with the classic flavour. The hard part is understanding the algorithm, since there are a lot of explanations of varying quality covering different variants (new and classic). Picking and combining explanations from the posts by Adrian Biagioli and Raouf worked out somehow.

I refactored a bit of code from StackOverflow (as one does) with a slightly different set of gradients. (Python code is available here)
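For reference, the core of the classic algorithm fits in a short sketch. This is my own simplified take with eight fixed gradient directions, not the code from the linked script:

```python
import math, random

random.seed(0)
perm = list(range(256))
random.shuffle(perm)
perm += perm                       # doubled table avoids index wrapping
GRADS = [(math.cos(a), math.sin(a))
         for a in (i * math.pi / 4 for i in range(8))]

def _fade(t):                      # Perlin's smoothstep: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin(x, y):
    xi, yi = math.floor(x), math.floor(y)
    xf, yf = x - xi, y - yi

    def grad_dot(cx, cy, dx, dy):  # gradient at a corner, dotted with offset
        g = GRADS[perm[perm[cx & 255] + (cy & 255)] & 7]
        return g[0] * dx + g[1] * dy

    u, v = _fade(xf), _fade(yf)
    n00 = grad_dot(xi,     yi,     xf,     yf)
    n10 = grad_dot(xi + 1, yi,     xf - 1, yf)
    n01 = grad_dot(xi,     yi + 1, xf,     yf - 1)
    n11 = grad_dot(xi + 1, yi + 1, xf - 1, yf - 1)
    x0 = n00 + u * (n10 - n00)     # interpolate along x, then along y
    x1 = n01 + u * (n11 - n01)
    return x0 + v * (x1 - x0)
```

Sampling `perlin(x * scale, y * scale)` over a grid gives the height values; the scale factor controls how wide the hills and valleys are.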

Once you’ve got the algorithm running, you get a set of Z values for an XY coordinate grid. How do we make anything 3d-printable from this data? The problem is that STL files are polygon meshes with vertices, edges, and faces, but all we’ve got at this point are raw coordinates.

Now we can either generate meshes by directly creating polygons after computing the noise, or we can continue working with points.

Option A: Meshes

To obtain a mesh, we just connect every set of four neighboring points into two triangles. The script generates an STL when a filename is specified.

python3 -x 100 -y 100 -z 10 -s 3 --output-stl mesh.stl --surface-only
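If the quad-splitting step sounds abstract, this is the whole trick, sketched stand-alone for a plain 2D height grid (a simplified version, not the actual script; normals are written as zero and slicers recompute them):

```python
def heightmap_to_stl(z, path):
    """Write a 2D height grid as an ASCII STL surface; returns triangle count."""
    tris = []
    for y in range(len(z) - 1):
        for x in range(len(z[0]) - 1):
            a = (x,     y,     z[y][x])
            b = (x + 1, y,     z[y][x + 1])
            c = (x + 1, y + 1, z[y + 1][x + 1])
            d = (x,     y + 1, z[y + 1][x])
            tris += [(a, b, c), (a, c, d)]   # each grid quad becomes two triangles
    with open(path, "w") as f:
        f.write("solid noise\n")
        for tri in tris:
            f.write(" facet normal 0 0 0\n  outer loop\n")
            for vx, vy, vz in tri:
                f.write(f"   vertex {vx} {vy} {vz}\n")
            f.write("  endloop\n endfacet\n")
        f.write("endsolid noise\n")
    return len(tris)
```

An n×m grid yields 2·(n−1)·(m−1) triangles, so even modest grids produce chunky but perfectly sliceable files.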

Option B: Point Clouds

If we continue with points, we basically got a point cloud. Let’s look at that:


python3 -x 100 -y 100 -z 10 -s 3 --output-xyz --surface-only

The most convenient software for visualizing point clouds I could find is MeshLab. I wrote the XYZ coordinates of my Perlin noise computation to a file, one coordinate tuple per line. MeshLab can open that via File > Import Mesh.

meshlab screenshot, points only

The nice thing about MeshLab is that it comes with a set of common algorithms for point cloud/mesh problems.

Apparently the correct term for getting from a point cloud to a mesh is “Surface Reconstruction” and the most straightforward way of doing this is a Screened Poisson algorithm. One requirement for that is to have the normals for all points and MeshLab can compute that easily by selecting Filters > Normals, Curvatures and Orientations > Compute normals for point sets.

Now one can just run Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Screened Poisson and hit Apply.

meshlab screenshot, mesh

That already looks pretty good! Apparently the algorithm creates a bit of padding at the edges of the point cloud, but that’s not a show stopper. The problem is that our mesh is not actually a body, just a surface.

Maybe there is a totally convenient way of just extruding this and remeshing or something similar, but I did not find an easy way to do it. What I did instead is change my Perlin noise script to also create point coordinates for “walls” on all four sides and a bottom.

meshlab screenshot, complete mesh

Same steps as before and then hit File > Export Mesh As and select STL. And now we’ve got an STL file that we could just print.

No matter in what way we created an STL file, the following steps are the same:


Prusa Slicer screenshot

But how can we use this STL file to modify another STL?

What I did was create another body in my CAD software which encompasses all the non-functional parts of the component. Every bit of space that this body occupies can be kept or removed depending on the Perlin noise output.

CAD model comparison

I exported this as an STL as well and combined these meshes with the simplest tool available: boolean operations in OpenScad.

OpenScad screenshot

union() {
    // sketch with placeholder filenames; the actual boolean ops depend on the part
    import("part.stl");
    import("perlin_pattern.stl");
}

The preview looks pretty awful because OpenScad (or CGAL) is not able to deal well with meshes that have overlapping points/faces. The output is not perfect, but can be repaired with a mesh repair tool or a slicer.

Loading the resulting STL in the slicer looks like this:

Prusa Slicer screenshot

To make the Perlin noise pattern printable upside down, I cut off all noise values >= 0 (only the valleys remain, not the hills).
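In code, that clipping step boils down to something like:

```python
def keep_valleys(z_rows):
    """Clamp all positive heights to zero so only the valleys remain."""
    return [[min(z, 0.0) for z in row] for row in z_rows]

print(keep_valleys([[0.8, -0.3], [-1.2, 0.1]]))  # → [[0.0, -0.3], [-1.2, 0.0]]
```

With the hills gone, the pattern has no overhangs when printed face-down on the bed.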

So, how does the print look?

Single Lens Pi Camera image


You can find the script on github.