Luma AI develops photorealistic 3D capture software that lets users capture scenes and objects on a smartphone and view them in 3D. The company combines advances in neural rendering and deep learning with the growing availability of compute to achieve photorealistic 3D capture. Luma AI targets several markets, including the e-commerce, real estate, and 3D games industries. The company also offers Genie, a multimodal text-to-3D model capable of creating a 3D object in under ten seconds with materials, quad mesh retopology, and variable polycount.
Headquartered in San Francisco, Luma AI was founded in 2021 by Amit Jain, Alberto Taiuti, and Alex Yu. Taiuti and Jain were previously Apple employees, and Yu was an AI researcher at UC Berkeley at the time of the company's founding. In October 2021, Luma raised $4.3 million in seed funding from Matrix Partners, South Park Commons, Amplify Partners, RFC’s Andreas Klinger, Context Ventures, and a group of angel investors. In March 2023, the company announced a $20 million Series A round led by Amplify Partners with participation from NVIDIA (NVentures), General Catalyst, and existing seed investors. In January 2024, Luma raised $43 million in a Series B round with participation from Andreessen Horowitz, Amplify Partners, Matrix Partners, NVIDIA, South Park Commons, and a group of angel investors. Sources stated that the funding valued Luma at between $200 million and $300 million.
Motion blur degrades the quality of the digital render, so Luma advises users to move the phone slowly and avoid rapid rotational movements. For best results, the object or scene should be captured from as many unique angles as feasible, moving the device around rather than rotating it from a stationary position. Standing in one place and capturing outwards in a spherical motion typically does not work well. The guided capture mode helps users achieve sufficient coverage of the object or scene to be rendered.
For guided captures, any object that can be easily viewed from all angles (including the top and bottom) is suitable. For free-form captures, any object is suitable, but more coverage yields better results, so larger objects may be problematic. To further improve the accuracy of the reconstruction, the entire object should remain in frame while it is being scanned, which gives the app more data on reflections and object shape.
The app can have difficulty handling complex reflections (e.g. curved mirror-like surfaces), curved transparent objects (e.g. plastic water bottles), and very large textureless surfaces (e.g. white walls). It can capture objects in most lighting conditions as long as textures are neither too bright nor too dark and remain identifiable. Lighting conditions are baked into the capture, so the scene should be lit the way the user wants it to appear in the final result.
In general, any movement in the scene during capture may degrade the quality of the render; for instance, tree leaves moving in the wind or people moving in the background can cause a loss of detail. The software is not compatible with video stabilization, which causes frames to have unstable camera intrinsics, an issue that is particularly acute on Android devices. The HDR video option on iOS may also cause artifacts. Luma generally recommends using fixed exposure, although variable exposure can work well in outdoor scenes with varying lighting conditions.
Instead of videos, users can upload raw images as zip archives of sequential photos through the Luma web interface. Photos are often higher quality than video frames and can be preferable if the highest render quality is desired.
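As a minimal sketch of this workflow, a folder of sequentially numbered photos can be bundled into a zip archive for upload using only the Python standard library. The function name, directory layout, and `.jpg` naming scheme below are illustrative assumptions, not part of Luma's tooling.

```python
import zipfile
from pathlib import Path

def zip_sequential_photos(photo_dir: str, archive_path: str) -> int:
    """Bundle sequentially named photos (e.g. IMG_0001.jpg, IMG_0002.jpg, ...)
    from photo_dir into a zip archive, preserving capture order.

    Returns the number of photos added. Names and layout are hypothetical;
    adapt to however the capture frames are actually stored.
    """
    # Lexicographic sort keeps zero-padded sequential names in capture order.
    photos = sorted(Path(photo_dir).glob("*.jpg"))
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_STORED) as zf:
        for photo in photos:
            # JPEGs are already compressed, so store them without recompression.
            zf.write(photo, arcname=photo.name)
    return len(photos)
```

Using `ZIP_STORED` avoids pointlessly recompressing already-compressed JPEG data, which keeps archive creation fast for large capture sets.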