These session instructions can be used for a workshop or for introductory lessons in a half-day to two-day class aimed at students and professionals with little to no experience with 3D digitization. The instructor for the workshop(s) should divide the content as appropriate for their goals.
The content covers the basics of planning for and carrying out 3D digitization of physical objects for libraries, museums, archives, and other cultural heritage organizations. It is broad enough to be inclusive of the arts and humanities as well as the sciences, where 3D digitization and modelling have become a new method of sharing and advancing knowledge. This session provides an introduction to two popular methods of 3D digitization, structured light scanning (SLS) and photogrammetry, but focuses on the latter. Students will learn the basics of digital photography and digital cameras as well as photogrammetric software, and will be supplied with supplemental readings. This session may be offered by more than one library instructor in collaboration with a faculty member or a museum/archives collaborator.
This session incorporates lessons learned after five years of 3D digitization. It is composed of five sections: (1) the basics of two popular 3D digitization methods to provide context; (2) goals assessment, equipment planning, and budgeting; (3) basic information about digital photography as it applies to photogrammetry; (4) a detailed step-by-step workshop on using photogrammetric software to build a 3D model; and (5) links to useful resources, software, and hardware, as well as supplemental readings and videos.
3D imaging of physical objects, like anything, has a learning curve. Learning to take good, clear, focused photographs with a digital camera is a must for this workshop. Knowing the basics of a point-and-shoot camera is perfectly acceptable. If audience members are familiar with digital image manipulation applications such as Photoshop, GIMP, or Paint.net, they will likely be able to grasp 3D imaging workflows fairly easily.
Library instructors, IT professionals, faculty members, museum curators, archivists. One instructor should be proficient with digital cameras and photography and one or both with photogrammetric software (Agisoft Metashape, VisualSFM, or Meshroom).
Audiences for the workshop(s) could include librarians, museum professionals, archivists, disciplinary faculty, IT professionals, undergraduate and graduate students.
Library, archive, and museum 3D digitization and preservation, scientific research (e.g. archaeology, paleontology, animal morphology, evolution, vertebrate animal studies), gaming, historic preservation, increasing access to unique collections, diversifying collections.
Participants will learn:
The basic differences, pros, and cons of white light/structured light scanning and photogrammetry used to digitize 3D objects.
What to expect in terms of financial and time investments associated with both methods.
How to choose the most appropriate 3D digitization method (i.e. white light/SLS or photogrammetry), associated equipment, and software.
The appropriate settings and techniques to employ with a digital camera to perform data capture for photogrammetry.
How to use Agisoft Metashape to build a 3D model from beginning to end.
Basic archival and web delivery principles for 3D content.
Instructors should have a good understanding of digital photography and photogrammetric software such as Agisoft Metashape, Meshroom, VisualSFM, or Recapture, as well as an understanding of 3D modelling, file formats, and web delivery mechanisms. Participants do not need any prior experience, but it is assumed that they have some experience with complicated software applications. While it is not required, a basic understanding of photography is useful for participants; if that is not possible, a good photo dataset for building a 3D model using photogrammetry software is provided. If possible, the supplemental materials, readings, and videos should be provided to workshop participants a week in advance of the workshop.
Because of the nature of 3D digitization and modelling, instructors will need to provide participants with a fair amount of the hardware and software listed below. It should be noted that 3D modelling is a resource-intensive process in terms of CPU, GPU, and RAM availability, and as such, you will need a fairly powerful computer to build 3D models in a timely fashion and to avoid crashing your operating system. If instructors do not have the time and/or resources to teach digital photography or to perform the photography itself, sample data is provided for teaching only photogrammetry using Agisoft Metashape. While this can be a huge time-saver, digital photography is a required skill in 3D digitization using photogrammetry. Links to open source alternatives to Metashape are provided in the supplemental materials section, but instruction on those programs is not provided. The skills learned during the Agisoft Metashape section will transfer to all other photogrammetric software applications.
A Windows, Mac or Linux desktop or laptop meeting the following minimum system requirements:
An Intel i7 CPU or equivalent with at least 8 cores, 16 GB of RAM, and preferably a dedicated Nvidia GPU such as a GeForce GTX 980 (on-board graphics will be very slow during 3D model generation).
A DSLR (digital single-lens reflex) camera (Canon, Nikon, Sony), a point-and-shoot camera, or a cell phone with manual control capability. Using a DSLR is strongly preferred, but learning with a cell phone is a low-budget way to get into photogrammetry without the expense of a DSLR camera.
Agisoft Metashape (available free for 30 days) or at heavy discounts for educational institutions at: https://www.agisoft.com/buy/online-store/educational-license/
7-Zip may be needed to unzip the supplied photogrammetry photo datasets. It is available for free at https://www.7-zip.org/. However, most Windows and macOS machines have native support for unzipping archives as well.
This session includes an introduction to white light/structured light scanning as well as photogrammetry. A goals assessment, a cost comparison with recommendations, and a planning exercise are also provided. To support learning digital photography and photogrammetric processing, the session also provides an explanation of the basics of digital photography, a detailed Agisoft Metashape 3D model generation workshop, and links to resources, software, and hardware, as well as supplemental readings (see Preparation and Resources) that will enhance student/participant understanding of structured light scanning, photogrammetry, and digital photography.
This overview to white light/SLS scanning and photogrammetry provides context in order to situate photogrammetry in the realm of 3D digitization for the purposes of a successful workshop. Ideally, these three sections could be delivered in a lecture or demonstration format in about two to three hours.
Despite the advances in and drops in cost associated with 3D digitization, it is still relatively new and complicated. And just as there is no perfect 2D scanner for every type of book, pamphlet, photograph, slide, or other 2D object, the same holds true for 3D scanning. Among the myriad choices of 3D scanners there are white light/structured light scanners, depth sensors, laser scanners, time-of-flight scanners, touch/probe scanners, lidar scanners, as well as photogrammetry. To complicate things further, one can get a desktop version or a handheld version of many of these different devices. So what should one choose? This session incorporates lessons learned from five years of 3D digitization practices at a medium-sized university with cross-campus collaboration.
This session focuses on two popular 3D scanning methods: white light/structured light scanning and photogrammetry, with specific emphasis on the latter. Why this focus out of all available methods? The answer is simple. In 2015, white light/structured light scanning and photogrammetry were the most affordable ways to digitize 3D content and achieve good results, and these methods have advanced quickly over the years. Consumers can now download an app on a cell phone to get started in photogrammetry. While this lesson does not cover cell phone 3D digitization apps, they are not a bad way to experiment with photogrammetry. There are many available in the App Store and Google Play for iOS and Android devices, respectively.
White light or structured light scanning (SLS) is a popular and versatile method to scan 3D objects. SLS scanners are available at relatively affordable prices both in desktop and handheld forms (see Table 1 below). An SLS scanner is a good entry point into the 3D digitization world, as the hardware and software pieces offered by commercial vendors are often a one-stop solution to building 3D models in a relatively affordable fashion. In addition, many of the concepts and skills employed in SLS scanning hold true in photogrammetry. SLS scanning is how the author began building 3D models, and although the results are not as good as what can be achieved with photogrammetry, it's a viable entry into this area of work at a moderate cost and, generally speaking, it is a simpler procedure than photogrammetry.
A simple SLS scanner usually consists of a single light source, often a small projector, which casts a pattern of structured light (not unlike what a QR code looks like in many cases) onto an object (see Figure 1). One or more cameras are used to capture the displacement of the structured light over an object, and the displacement of one bar or stripe of light to the next is calculated to form not only the location/geometry (in 3D space) of different parts of the object, but also the texture (or color information). A key piece of information to remember about an SLS scanner (as well as photogrammetry and laser scanning, for that matter) is that it is a line-of-sight scanning method, meaning that if the scanner cannot see it, that portion of the object will not be in the 3D model.
An SLS scanner works in the same way that a laser scanner works: by using triangulation to make accurate calculations of where a single point in space is located. This is fairly simple if one hearkens back to trigonometric principles. The software application associated with an SLS scanner (or a laser scanner) knows the distance from the light-emitting source as well as the angle to the object being scanned. With basic trigonometry applied, the software knows the exact location of that point in space. Multiple scans are then taken around an object, often with the use of an automated turntable; the scans are aligned with one another either automatically or manually, any erratic data captured is cleaned up, and one ends up with a 3D model. That, of course, is a very truncated overview of how SLS scanning works.
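The triangulation described above can be sketched in a few lines of Python. This is only an illustrative simplification, not how any vendor's scanner software is implemented; the function name and the flat two-dimensional geometry are assumptions for this example. The projector and camera sit at the two ends of a known baseline, and the angle each ray makes with that baseline pins down the location of the lit point.

```python
import math

def triangulate(baseline_mm, alpha_deg, beta_deg):
    """Locate a lit point seen from both ends of a known baseline.

    The projector sits at x = 0 and the camera at x = baseline_mm;
    alpha and beta are the angles each device's ray makes with the
    baseline. Returns (x, z), the point's position in millimetres.
    """
    ta = math.tan(math.radians(alpha_deg))
    tb = math.tan(math.radians(beta_deg))
    x = baseline_mm * tb / (ta + tb)
    z = baseline_mm * ta * tb / (ta + tb)
    return x, z

# Two symmetric 45-degree rays over a 200 mm baseline meet half-way
# along the baseline, 100 mm away from it.
x, z = triangulate(200, 45, 45)
print(round(x, 3), round(z, 3))  # 100.0 100.0
```

Real scanners solve this same triangle thousands of times per projected stripe, which is why the baseline and angles must be calibrated precisely.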
SLS scanners are very popular due to their relative ease of use and affordability. One can purchase an entry level SLS desktop scanner with software included (the EinScan SE) for a little over $1000 as of the writing of this kit and get started with decent results or get an intermediate level SLS desktop scanner for about $4000 for better results (see Figure 2).
White Light/SLS Scanner Equipment & Costs
Entry Level SLS: ~$1200 - $2400
Table 1. Entry level and intermediate level white light/SLS scanner costs.
As with all scanners, however, they do have their pros and cons. SLS scanners are good at building 3D models of small to medium-sized objects, up to about the size of a basketball or a little larger. Geometric resolution and accuracy are good, and they are relatively fast at the scanning process itself. However, texture information (that is, color in 3D parlance) is not always great. In fact, some of the most popular SLS scanners (the David SLS-3, the EinScan SE, and the EinScan SP) use monochrome cameras, which measure color in a somewhat odd fashion: these scanners generate texture by measuring the reflectance from projecting red, green, and blue light on the surface of an object to arrive at what the software thinks is the color. In some cases the results are good, and in others they are not. In comparison to photogrammetry, texture generated from an SLS scanner will hardly ever be as clear and crisp, so if that is important to your goals, an SLS scanner is not the best choice. In addition, SLS scanners, as opposed to laser scanners, struggle to capture objects that lack distinct features, as well as objects that are highly reflective, dark, transparent, or translucent. For example, generating a 3D model of a black billiards ball, an object with no unique features, with an SLS scanner would be very difficult. Another example would be that of generating a 3D model of a glass vase; the SLS scanner would struggle with this object because the light would likely pass through it and refract, making geometric measurements inaccurate. In some cases it is possible to spray powder on a shiny, translucent, or transparent object to dull its surface and make SLS scanning more accurate. However, with rare, unique, or valuable objects, this can obviously be a problematic solution. As with 2D scanning, one must choose the appropriate tool on a case-by-case basis.
From an archival and preservation point of view, SLS scanners are problematic because the software they come with is always proprietary to that hardware, and the raw 3D files and scans are also in a proprietary format. Certainly, one can generate a model and export it in a common 3D format (PLY, OBJ, STL, glTF, etc.) at the highest resolution settings used in the software associated with the scanner. However, those final 3D files are still derivatives of the raw original scans in that the operators of the software can, and always do, make numerous choices in how those files are generated. For example, data may have been removed in the data cleaning process, holes in the model will certainly be filled depending on a user-adjustable software setting, and texture information could have been altered quite easily.
To make all of these variations somewhat easier to understand, below is an example of a fossil (in this case, a dinosaur pedal element) scanned both with an EinScan SP and with a David SLS-3. Even though these 3D models represent the exact same object, you'll notice many variations between the two, including the texture (color), the number of triangles in the corresponding meshes, as well as areas where one scanner (the David) struggled with alignment on the top edge of the fossil. In this case, the EinScan SP, a much cheaper scanner, produced much truer-to-life texture and a very good mesh, whereas the David SLS-3 produced a much darker-than-life texture and a questionable mesh (even though it contained nearly ten times the number of triangles and is therefore a much more detailed geometric model). Later on, we'll revisit this object as captured with a DSLR camera and modelled using photogrammetry for comparison.
At its most basic level, all that is required to build a photogrammetric model are two overlapping photographs of the same object or area. If one is familiar with stereoviews produced during the late 19th century, which can be viewed through an analog, handheld stereo viewer, one can grasp the basic concept of photogrammetry. Obviously, it's not that simple, but the basic framework of photogrammetry is the use of overlapping photographs of the same object, taken from multiple perspectives, through which a software application can ultimately build a 3D model. Utilizing this technique, often called Structure from Motion (or SfM), a software program can analyze these overlapping photos and determine how much the camera has moved (relative to the object) to develop a 3D set of points in space (each point having its own X, Y, and Z coordinates). This set of 3D points is called a point cloud. Each of those points can also have an associated RGB (red, green, and blue) color value to apply texture (color) to the points in the cloud.
Like SLS/white light scanning, photogrammetry is a line-of-sight method of building 3D models. That means if the camera cannot see it or did not capture it, the software program cannot model it accurately. So, one must take numerous overlapping photos of a surface or an object and, generally speaking, you want to overlap your photos by ⅔, or about 66%, as seen in Figure 5 below. It's easiest to understand the ⅔ overlap rule when thinking about capturing a flat surface, such as a wall or a painting.
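The ⅔ overlap rule for a flat surface reduces to simple arithmetic: each new frame advances by only one third of the frame width. As a rough illustration (the function name and the example numbers are hypothetical, not taken from the workshop materials), a short Python sketch:

```python
import math

def shots_for_flat_surface(surface_cm, frame_cm, overlap=2/3):
    """Photos needed to sweep one row of a flat surface.

    With 2/3 overlap, each new frame advances by only one third of
    the frame width, so every spot on the surface appears in at
    least three photos.
    """
    if surface_cm <= frame_cm:
        return 1
    step = frame_cm * (1 - overlap)  # camera advance per shot
    return 1 + math.ceil((surface_cm - frame_cm) / step)

# A 3 m wall with a frame that covers 60 cm per photo: the camera
# advances 20 cm per shot, so 1 + ceil(240 / 20) = 13 photos.
print(shots_for_flat_surface(300, 60))  # 13
```

The same counting logic applies per row when you also move the camera up or down the surface.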
Capturing a flat object is relatively straightforward, but capturing one in the round is a little more complicated; it can be done either by walking around the object or by placing the object on a turntable if it is small and light enough. In these cases, the same basic concepts and rules apply, as you still need overlap. Photos are taken at preset intervals around a circle; for example, 36 photos per 360 degrees is one photo every ten degrees of rotation. This is fairly easy to do on a turntable (either homemade or automated), and somewhat harder to do when one has to physically move around an object. The following figure is a basic example of this photographic technique.
Bear in mind, though, that when you're dealing with an object that has features in all three axes, you need to capture photos from all perspectives, which means raising and lowering the camera and taking overlapping pictures in circuits at different angles relative to the object. The easiest way to think about it is to imagine a sphere of pictures around the object. If you're dealing with an object that only sits in one orientation, like a statue, think of the pictures as an umbrella over the object. If it's a true three-dimensional object, like a fossil toe, you'll need to capture all sides of the object. Generally speaking, you'll want to take a circuit of photos with the camera at a 60-degree angle, another at about 45 degrees, and another at about 30 degrees. You can do more, and you may need to depending on the object.
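The circuits described above amount to a simple list of camera stops. The sketch below is illustrative only (the function name and defaults are hypothetical; adjust the shots per circuit and the elevation angles to suit your object):

```python
def capture_plan(shots_per_circuit=36, elevations_deg=(30, 45, 60)):
    """Every camera stop as an (elevation, azimuth) pair in degrees.

    One circuit of evenly spaced shots is taken around the object at
    each elevation; 36 shots per circuit means one shot every ten
    degrees of turntable rotation.
    """
    azimuth_step = 360 / shots_per_circuit
    return [(elev, i * azimuth_step)
            for elev in elevations_deg
            for i in range(shots_per_circuit)]

plan = capture_plan()
print(len(plan))          # 3 circuits x 36 shots = 108 photos
print(plan[0], plan[1])   # (30, 0.0) (30, 10.0)
```

A plan like this makes clear why an automated turntable pays for itself: a hundred or more carefully positioned shots per object is tedious to do by hand.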
See Figure 7 for an example of what those camera positions look like when capturing photographs of the same dinosaur pedal element seen earlier (from Figures 3 and 4 above).
Camera file output settings are an important element of taking good pictures and generating professional 3D models. For the best possible results, you'll want to set your file outputs as camera raw images and process them as TIFFs. This does require you to export them from your camera's raw output format into TIFFs. Exporting camera raw files to TIFF can be accomplished with a number of proprietary software applications, such as Adobe Photoshop, as well as a number of open source applications, such as Darktable or GIMP. For the purposes of learning, however, one can skip that step and set the camera to save pictures as fine (or large) JPEGs, though this is not recommended for professional modelling with photogrammetric applications.
Taking good and clear pictures with proper camera settings and equipment is also very important. This is as much an art as it is a science, and while it is not the purpose here to teach digital photography, this step is critical. Generally speaking, a good DSLR camera with a prime (i.e. non-zoom) lens of somewhere between 24mm and 50mm is a very good starting point. Canon, Nikon, and Sony all make very good cameras, but learning how to use them is going to be a little different in each case. A simple point-and-shoot camera can produce decent results, but it really needs to have full manual control or at least the ability to work with an aperture priority setting. For the purposes of learning photogrammetry without any real investment, even a cell phone camera with manual control (and an ~8 megapixel sensor) can yield decent results.
Here we can see the result of building a 3D model of the same dinosaur pedal element (from Figures 3 and 4 above) using photogrammetry. In this case, we used a light box, automated turntable, a Canon EOS 5D Mark III DSLR triggered by the Foldio 360 app, with a 50mm prime lens as well as Agisoft Metashape (professional version). The result is a very clean model with crisp texture information.
For the best results in photogrammetry, you need good lighting and the right camera settings. Every modern digital camera, and even old film cameras, offers ISO (light sensitivity), shutter speed (time of exposure), and aperture settings. One must learn how to balance these three settings to achieve a good quality picture that is in focus. These three settings are sometimes referred to as “The Exposure Triangle,” and it is an important concept to grasp in photography of all kinds (see Figure 9).
Generally speaking, you want to keep your ISO setting as low as possible, as a high setting can introduce a lot of noise into your photos; aim for nothing higher than ISO 400. If you are able to use a tripod, you will have a lot more control over your exposure time (that is, your shutter speed). If you're holding your camera by hand, a long exposure time will usually introduce blur into your photo because you won't be able to hold the camera steady while the shutter is open. With a tripod, you can keep your ISO low and lengthen your exposure time to introduce enough light into your photo without the blur. Aperture (often referred to as f-stop) is also important: it controls the amount of light that enters the lens, but it also greatly impacts your depth of field. A wide-open aperture is a numerically low f-stop setting (for example, f/2), whereas a narrow aperture is a numerically high one (for example, f/22). A low aperture setting will let a lot of light in, but will also yield a shallow depth of field, meaning that anything with length/depth will be in focus at one point but out of focus at others. To deal with depth of field problems, you need to move your f-stop setting higher (numerically), which closes the aperture. A high f-stop setting, f/11 or higher, might be necessary to keep all points of an object in focus. Generally speaking, you want to have an f-stop setting of f/5.6 to f/11. Sometimes that is just not possible; as with this workshop example (where a setting of f/22 was necessary in some cases), you will have to make some compromises on your aperture setting. The only other way around depth of field issues is to employ a focal stacking system, which adds a great deal of complexity and cost to the data capture.
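The exposure triangle can be made concrete with a little arithmetic. The sketch below is an illustrative helper (the function name is hypothetical, not a formula from any camera manual): it folds the three settings into a single exposure number, and combinations that share the same value produce the same overall image brightness. This is why a tripod lets you trade a longer shutter speed for a narrower aperture or a lower ISO.

```python
import math

def exposure_value(f_stop, shutter_s, iso=100):
    """A single exposure number normalised to ISO 100.

    Combinations of aperture, shutter speed, and ISO that share the
    same value admit the same total light, so you can trade one
    setting against another without changing image brightness.
    """
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)

# Stopping down from f/4 to f/8 costs two stops of light; on a
# tripod you can win them back by quadrupling the exposure time
# (1/60 s to 1/15 s) instead of raising the ISO.
print(round(exposure_value(4, 1/60), 2))  # 9.91
print(round(exposure_value(8, 1/15), 2))  # 9.91
```

Note that marked f-stops such as f/5.6 and f/11 are rounded values, so real-world pairings are close to, rather than exactly, whole stops apart.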
It should also be noted that when taking photos even with a tripod mounted camera, you will want to employ some type of remote trigger for the camera if possible. This can be accomplished by using an IR trigger, sometimes via WiFi or Bluetooth, a conventional remote trigger (connected to the camera with a wire), or by using a computer (also connected physically to the camera) using the camera’s proprietary software to trigger the camera remotely. This will eliminate any blurs that physically touching the camera to take a photo induces (that is, when you push the button on the camera to take a photo, that often jostles the camera just enough to blur a photo at certain shutter speed settings). If you cannot use a remote trigger, lengthening the shutter speed will often be necessary (which will also mean that you’ll need to adjust your aperture and exposure settings).
In a studio environment, if one is focusing on small to medium-sized objects, up to about 15 inches in diameter and up to 10 pounds, one can purchase a light box (with LED lights) and an automated turntable system that will remotely trigger an infrared-equipped (IR) camera to take pictures at preset degree intervals. For about $300, a basic system from Orangemonkie, the Foldio 360 with a 25-inch lightbox, automated turntable, and LED lights, is highly recommended. If you are looking to digitize even 50 objects using photogrammetry, this is a very worthwhile investment, as it will cut down on your data capture time significantly. See Table 2 below for examples of photogrammetry equipment and costs.
Photogrammetry Equipment & Costs
Entry Level Photogrammetry: ~$2800 - $3500
Table 2. Entry level and intermediate level photogrammetry costs.
Table 3: Pros and Cons of White Light/SLS scanning vs Photogrammetry.
Instructors should present and discuss appropriate content from the overview and supplemental materials to suit their audience goals prior to the step-by-step photogrammetry workshop.
Have each participant use the basic information about the two 3D digitization methods learned during the introduction to match their budget and goals in 3D digitization.
Have each participant consider their goals with 3D digitization practices by answering the essential questions found in the Goals Assessment and Equipment Planning supplemental material. This can be pre-work for the participants to prioritize hands-on time during the workshop if needed. Ensure you reserve ~10 minutes per participant for reporting and ~30 minutes for group discussion during the workshop, which can be driven by any one of the questions in the Goals Assessment and Equipment Planning document.
Reporting: Once each participant has answered all of these questions, ask them to share during the workshop what they would like to digitize, which method seems more appropriate, and why. Participants should summarize their answers in no more than 10 minutes.
Group Discussion: What are the commonalities between the different objects participants are working with? What are participants' biggest concerns? Is there an obvious preference for one method over the other, and why? Question number 7, particularly 7e, should generate a great deal of discussion on its own.
See Supplementary Materials: Agisoft Metashape Workshop in Detail for step by step instructions.
Step 1: Create and save project file.
Step 1b: Familiarize yourself with the Metashape interface.
Step 2: Add dataset photos from the extracted 7z archives provided at: https://wyoscholar.uwyo.edu/articles/dataset/Greek_Cycladic_Figurine_photogrammetry_dataset/16543554
Step 3: Mask photos.
Step 4: Align photos.
Step 5: Gradual selection and manual clean-up procedure.
Step 6: Build dense point cloud.
Step 7: Clean dense point cloud.
Step 8: Build mesh.
Step 9: Clean mesh (if necessary).
Step 10: Scale Model (if desired).
Step 11: Build texture (if desired).
Step 12: Export model.
This section can serve as a stand-alone discussion or serve as a wrap up to the Agisoft Metashape workshop.
Now that participants have actually built a 3D model, their options for what to do with it are worth discussing. Perhaps they are interested in printing 3D replicas of what they built? Or perhaps they want to take these objects and make them digitally accessible online in a repository of some kind? Alternatively, they may be interested in using these objects in a virtual or augmented reality application aimed at undergraduate student learning. While this article was being written, the author’s own 3D digitization efforts were concentrated on providing undergraduate and graduate students access to teaching collections that they could not access easily because of COVID-19 restrictions. Another possibility is the idea of promoting and building 3D content as Open Educational Resources instead of traditional textbooks. There are many possibilities with regard to what comes next with these models, but some understanding of current trends with 3D content hosting, web delivery and archiving should be discussed.
Web delivery of 3D content is changing quickly, and there are many ways to accomplish it. There are a few popular platforms with which participants should be familiar. The two most prominent are Sketchfab and Thingiverse. The author's home institution (University of Wyoming Libraries) has an institutional account with Sketchfab with hundreds of 3D models available for inspection and download. Sketchfab would be considered more of a popular engagement and/or teaching platform than a digital repository. As such, it is not recommended to rely on Sketchfab to retain your content over the long term; one needs to ensure preservation in another system. However, it is a great and inexpensive platform for delivering 3D content. Sketchfab offers a 3D web viewer, multiple model inspection functions, the ability to password-protect content and grant view or download access, support for attaching Creative Commons licenses, and native support for virtual and augmented reality. One can even export models directly to Sketchfab from Metashape (and many other software programs) using the Sketchfab API.
Thingiverse is another 3D model hosting platform that can be used, but the author has no direct experience with it. Participants should be encouraged to look at what others have done on both platforms for inspiration in their own work and to consider the pros and cons of each for their collection or project goals.
For more research-oriented interests, the MorphoSource platform, hosted and developed at Duke University, is one of the most advanced 3D digital repository platforms available at this time. MorphoSource is oriented more towards biodiversity and fossil specimens, but it is available for use by other institutions. Unlike Sketchfab and Thingiverse, MorphoSource is considered a true digital repository capable of preserving content long-term as well as offering a 3D web viewer, whereas the former two are really only capable of hosting the 3D models themselves (i.e. the derivatives of the raw data).
Within the libraries, archives, and museum community, most repositories are certainly capable of archiving the files (as most platforms are somewhat file agnostic), but they usually lack a way to actually render a 3D object in a web browser (MorphoSource, however, does have a built-in 3D model viewer). In addition, there is very little agreement on how to archive 3D content (either within a formal digital repository or otherwise) and even more divergent thoughts on what file format(s) to store long term. Given the effort and time required to build a 3D model, regardless of the method employed, the raw scans from an SLS scanner or the raw, unaltered imagery from a photogrammetry dataset should be preserved. The CS3DP and LIB3DVR groups have begun to address these issues, but it is far from a cut-and-dried issue at this point in time. What is fairly easy to say is that raw photogrammetry data is easier to store and archive than the data from most other 3D scanning methods, because photogrammetry relies on 2D photos to build 3D models. If one were interested in archiving photogrammetric data, it would be wise to save the image datasets in TIFF format for reuse in the future. Going through the process of building a 3D model in Metashape should make clear that the end result of processing that raw data involves many human inputs and choices that are very difficult to document in terms of digital preservation. In addition, photogrammetric software is advancing rapidly, and storing the raw images will allow for potentially better 3D models to be built in the future than we are capable of now.
3D Collection Strategies, Virginia Tech University Libraries, last revised 2020, accessed November 22, 2020, https://lib.vt.edu/research-teaching/lib3dvr.html
3D Models by University of Wyoming Libraries, UW Libraries, last revised 2020, accessed November 22, 2020, https://sketchfab.com/uwlibraries/collections
Community Standards for 3D Data Preservation, CS3DP, last revised May 22, 2020, accessed November 22, 2020, https://cs3dp.org/.
Grayburn, J. et al. 2019. 3D/VR in the Academic Library: Emerging Practices and Trends. CLIR Publication 176. 139pp. https://www.clir.org/pubs/reports/pub176/
Matthews, N. A. 2008. Aerial and Close-Range Photogrammetric Technology: Providing Resource Documentation, Interpretation, and Preservation. Technical Note 428. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, Colorado. 42 pp. https://www.blm.gov/documents/national-office/blm-library/technical-note/aerial-and-close-range-photogrammetric
MorphoSource, Duke University, last revised 2020, accessed November 22, 2020, https://www.morphosource.org/
Orangemonkie, last revised 2020, accessed November 22, 2020, https://orangemonkie.com/
Sketchfab, last revised 2020, accessed November 22, 2020, https://sketchfab.com/
Sketchfab Help Center, Sketchfab, last revised 2020, accessed November 15, 2020, https://help.sketchfab.com/hc/en-us/articles/360025343191-Agisoft-Metashape-Photoscan-
Thingiverse, last revised 2020, accessed November 22, 2020, https://www.thingiverse.com/