Welcome to another episode.
This time we leave traditional blacksmithing and enter the realm of high tech.
In this video, I will show you how to 3D scan an old artifact like this axe head.
The technique is called photogrammetry and is also used by museums to scan and better
understand their artifacts.
The basic idea is to take many photos of an object from all kinds of angles and then use
fancy computer software to digitally reconstruct it.
I made myself a little turntable and painted everything green to better separate the object
being scanned from the background.
For turntable operation, the background needs to be completely masked away, or otherwise
the software gets confused and cannot do the 3D reconstruction.
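The masking idea can be sketched in a few lines of Python. This is just an illustration of simple chroma keying, not what the actual photogrammetry software does internally, and the dominance threshold is an assumption I picked for the example:

```python
import numpy as np

def green_screen_mask(img, dominance=1.3):
    """Return True where a pixel likely belongs to the object, False where
    the green background dominates. `dominance` controls how strongly green
    must exceed red and blue for a pixel to count as background."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    background = (g > dominance * r) & (g > dominance * b)
    return ~background

# Tiny synthetic example: a green frame with one rust-colored pixel.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[...] = (0, 255, 0)       # green background
frame[0, 0] = (120, 80, 60)    # rusty object pixel
mask = green_screen_mask(frame)
```

Real tools refine a rough mask like this with manual touch-ups, which is exactly what I ended up doing as well.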
Before explaining the principles behind photogrammetry, let me talk a little bit about the process.
The camera is using a 50 mm fixed lens and all settings are completely manual.
The aperture is set to f/22 and the focus is set manually.
The trick with the lighting is that it needs to be at the same position and angle as the
camera lens.
No matter from which angle we photograph, the lighting should be identical.
This axe head is a few hundred years old and heavily rusted from being exposed to the elements.
That means there are no reflective surfaces which is ideal for this.
This process would not work with a shiny metal object.
To scan reflective objects they may need to be prepared with powder or removable paint.
Another complication for the axe is that we want to scan the eye as well.
For that, I am raising the camera to multiple heights so that we can see inside the eye.
You will see in a moment that this is not a traditional video for me, since I am actually
showing you screen captures from the software that I used to create the 3D files.
Many of you may not find this very interesting, but
I hope it will at least give you an idea of what is involved, especially now that everything
3D is becoming much more popular.
So, let me take out the compact flash and then we can go down the rabbit hole.
Since I am shooting in Canon RAW, I first need to convert the files to JPEG and also
save them as a numbered sequence of files.
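The RAW-to-JPEG conversion itself needs a raw developer, but the renaming into a numbered sequence is easy to sketch. This is only an illustration; the `img_` prefix and four-digit padding are my own assumptions, not what any particular tool requires:

```python
from pathlib import Path

def make_numbered_sequence(src_dir, dst_dir, ext=".jpg"):
    """Copy already-converted JPEGs into dst_dir as img_0001.jpg,
    img_0002.jpg, ..., ordered by their original filename."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    out_names = []
    for i, src in enumerate(sorted(Path(src_dir).glob(f"*{ext}")), start=1):
        out = dst / f"img_{i:04d}{ext}"
        out.write_bytes(src.read_bytes())  # copy the file contents verbatim
        out_names.append(out.name)
    return out_names
```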
One thing I am noticing here is that a set of files was not exposed properly.
This happened when I forgot to raise my continuous lights after raising the camera.
It's ok because I noticed while filming and retook that set of photographs.
However, I don't want to use those photos for the photogrammetry process.
What I am showing now is the procedure for masking out the green screen so that only
the axe head remains, as well as the manual adjustments needed to create a better mask.
I will not explain what I am doing here but instead talk a little bit about photogrammetry.
I am no expert in this area so be forewarned that this explanation may be somewhat inaccurate.
Let's assume we have a rigid object such as the axe.
I took several different photos of the axe, and across these photos we can identify
the same features, such as a particular speck of rust, even though the axe
head moved in relation to the camera.
Let's assume we can find multiple such features - tens of different points - on multiple different
photos.
We can now look at how much each of these points moved between the different photos.
We know that the camera parameters are fixed and we also know that the different points
are not moving on the axe.
This information makes it possible to compute the exact camera position relative to the
axe for each photo as well as the 3D location of the feature points.
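Once the camera poses are known, recovering a feature's 3D position is a standard textbook computation called linear (DLT) triangulation. This is not necessarily how the commercial software does it; it is a minimal sketch of the principle with two toy cameras I made up:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Given two 3x4 camera projection matrices and the image coordinates of
    the same feature in both views, recover the feature's 3D position as the
    null space of a small linear system (direct linear transform)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution, up to scale
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([1.0, 2.0, 5.0, 1.0])       # true 3D feature (homogeneous)
x1 = (P1 @ point)[:2] / (P1 @ point)[2]      # projection into view 1
x2 = (P2 @ point)[:2] / (P2 @ point)[2]      # projection into view 2
recovered = triangulate_point(P1, P2, x1, x2)
```

Doing this for tens of thousands of matched features is what produces the point clouds you will see in a moment.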
I am using Agisoft PhotoScan, which automates this whole process.
First, I need to load all the photos that I took and then tell PhotoScan that it should
read the mask I created from each photo's alpha channel.
I quickly verify that this is really true and then ask the software to find a lot of
feature points common across several photos.
The turntable makes this trickier because the camera really did not move.
However, by telling the software to find feature points only in the areas that have not been
masked, we simulate that the camera moved and the axe remained still.
The software now finds the virtual position of the camera for each photo and also computes
the 3d feature points I had mentioned earlier.
This creates a sparse point cloud that PhotoScan calls tie points.
Here you can see all the camera positions and the initial point cloud, which we can inspect
to get an idea of whether the software is understanding the shape of the axe.
So far it looks pretty good.
The next step is to create a dense point cloud, which consists of millions of points in 3D space that
accurately conform to the shape of the axe.
This takes quite a while to compute.
As we can see, the points are so dense that we cannot really tell them apart anymore.
I am quite pleased with the results as everything seems to match perfectly including the eye
of the axe.
This is really all we need.
Now, I will tell the software to compute a 3D mesh and a texture map for the axe.
The texture is really just a continuous photo of all the surfaces and can be used in 3D
programs that render the axe, as you saw in my little intro sequence to
this video.
The mesh looks really nice as well.
You of course may wonder why anyone would go through all that trouble.
Next, I will 3D print a copy of the axe and also show that the important detail in
the eye has been retained; it may yield some clues about how the axe was made.
As you may be able to tell, I am quite fascinated that this process worked so well.
While not all intricate detail such as the tiny rust pits has been retained, with the
current number of photos - about 70 - a lot of detail is available nonetheless.
I was hoping to print a full-scale copy of the axe and also use
the 3D model for precise measurements, but reality is intervening.
I am measuring the whole axe at roughly 7 inches in length.
Yes, that's about 178 millimeters in metric countries.
Unfortunately, my 3d printer has a maximum print size of 6 inches.
For demonstration purposes, I am printing it anyway, just a little bit smaller in size.
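The arithmetic behind the shrink is simple enough to write down. The variable names are mine; the numbers are the ones from the video, a 7-inch axe going into a 6-inch print volume:

```python
MM_PER_INCH = 25.4

axe_length_in = 7.0                           # measured on the 3D model
full_scale_mm = axe_length_in * MM_PER_INCH   # about 178 mm, as mentioned
scale = 6.0 / axe_length_in                   # shrink to fit a 6-inch bed
print_length_mm = full_scale_mm * scale       # length of the printed copy
```

So the printed copy comes out at roughly 86 percent of full size, about 152 mm long.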
As with anything, things are never simple.
Here, I am finding that the 3D model is not oriented appropriately.
Fortunately, the software - with some trials and tribulations - allows me to reposition
the axe on the virtual print bed.
This may be a good time to talk about the difficulties with 3d printing.
I am using a fused deposition modeling printer with a single print head.
That means that any areas that would hang in the air need to be held up by so-called
supports that get printed as well.
The supports need to be removed and cut away later.
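A common rule of thumb, often quoted around 45 degrees, says that any downward-facing surface steeper than the printable overhang angle needs support under it. Here is a hedged sketch of that check for a single mesh face; the threshold and the function name are my own assumptions, and real slicers use more sophisticated heuristics:

```python
import numpy as np

def needs_support(face_normal, max_overhang_deg=45.0):
    """Rule-of-thumb check for one mesh face: a downward-facing surface whose
    normal points within (90 - max_overhang_deg) degrees of straight down
    cannot be printed in mid-air and needs support material beneath it."""
    n = np.asarray(face_normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] >= 0.0:                      # faces up or sideways: printable
        return False
    angle_from_down = np.degrees(np.arccos(-n[2]))
    return angle_from_down < (90.0 - max_overhang_deg)
```

A horizontal ceiling (normal pointing straight down) needs support; a vertical wall does not.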
Another problem with 3d printing is adhesion to the print bed.
I am using HIPS as a material, which contracts as it cools and has a tendency to pull
away from the print bed.
I had to print the axe twice before I was able to get a successful print.
Everything you are seeing here is shown several times faster than it happens in reality.
This whole project took about a day.
Just printing the axe head took about 3 hours.
Alright, now it looks positioned correctly, so it's time to start up the printer.
That said, before the printer can start, the software first needs to compute all the individual layers,
which takes quite a while as well.
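The heart of that layer computation is intersecting every mesh triangle with a stack of horizontal planes. This is a deliberately minimal sketch of the core geometric step for a single triangle, not a full slicer:

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3x3 array of xyz vertices) with the horizontal
    plane at height z. Returns the 0 or 2 crossing points; collecting these
    segments over all triangles traces out one layer's outline."""
    tri = np.asarray(tri, dtype=float)
    crossings = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:        # edge straddles the plane
            t = (z - p[2]) / (q[2] - p[2])     # interpolation factor along edge
            crossings.append(p + t * (q - p))
    return crossings

# One slanted triangle cut at half height yields one outline segment.
segment = slice_triangle([(0, 0, 0), (1, 0, 1), (0, 1, 1)], z=0.5)
```

Repeating this for every layer height, then planning toolpaths and infill over each outline, is why slicing takes a while.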
When looking at the layers though, we can get a good idea of how the printer is going
to put the axe together.
This also allows us to see the infill which sort of looks like a honeycomb pattern.
For the material I am using, the print head needs to be heated to 235 degrees Celsius,
and the print bed needs to be at 110 degrees Celsius.
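Slicers encode these temperatures as standard RepRap/Marlin G-code commands at the start of the print. The exact start sequence varies by slicer and printer; this little generator just shows the widely used temperature commands with the HIPS settings from the video:

```python
def heatup_gcode(hotend_c=235, bed_c=110):
    """Emit the standard RepRap/Marlin heat-up commands typically placed at
    the start of a print: M140/M190 for the bed, M104/M109 for the hotend.
    The M190/M109 variants block until the target temperature is reached."""
    return [
        f"M140 S{bed_c}",     # start heating the bed
        f"M104 S{hotend_c}",  # start heating the hotend
        f"M190 S{bed_c}",     # wait for the bed to reach temperature
        f"M109 S{hotend_c}",  # wait for the hotend to reach temperature
    ]
```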
This is all quite hot and touching is not recommended.
I am using a Lulzbot Mini 3D printer, which automatically levels the head relative to
the print bed and makes it much easier to use.
Let's watch how the axe is slowly being printed.
The video quality is not the greatest here since I did not set up lights and this was
printing at night.
I think it's still good enough to get a sense of the process though.
As you may be able to see here, the print had already separated from the bed, but fortunately
not before it was done.
Lucky me.
Otherwise, this would have been another failed print.
Removing the supports takes a bit of time as well.
I did not sand the model afterwards but cleaned it up as well as I could.
Here is the final result.
Given all the challenges, this looks pretty close to the original.
Let's take a look with a flashlight at the details in the eye as well.
I hope you liked this video even though it was a little bit different.
Thanks for everyone's patience with my slow video production, and thanks to those of you on Patreon.
As an extra goodie, I will make the 3d file of the axe available as a benefit on Patreon.
See you next time.