Luma AI is a developer of photorealistic 3D image capture software designed to enable users to capture 3D-viewable photos of scenes and objects on smartphones.
Luma AI develops photorealistic 3D capture software that lets users capture photos of scenes and objects viewable in 3D on smartphones. Luma AI combines advances in neural rendering and deep learning with the greater availability of compute to achieve photorealistic 3D capture. Luma AI targets a number of markets, including the e-commerce, real estate, and 3D games industries. The company also offers Genie, a multimodal text-to-3D model capable of creating any 3D object in under ten seconds with materials, quad mesh retopology, and variable polycount.
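Genie's generation pipeline has not been published, but a variable polycount is commonly offered by decimating a dense mesh into several levels of detail. The following sketch illustrates the general idea using the open-source Open3D library (unaffiliated with Luma); the file names and target triangle counts are placeholder assumptions, and quadric decimation produces triangle meshes rather than the quad topology Genie advertises, which requires a separate retopology step.

```python
import open3d as o3d

# Load a generated mesh (placeholder path; any triangle mesh works).
mesh = o3d.io.read_triangle_mesh("genie_object.obj")
print(f"original triangles: {len(mesh.triangles)}")

# Produce several levels of detail via quadric-error decimation,
# a standard way to expose a variable polycount to users.
for target in (50_000, 10_000, 2_000):
    lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    lod.compute_vertex_normals()
    o3d.io.write_triangle_mesh(f"genie_object_{target}.obj", lod)
    print(f"wrote LOD with {len(lod.triangles)} triangles")
```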
Headquartered in San Francisco, Luma AI was founded in 2021 by Amit Jain, Alberto Taiuti, and Alex Yu. Taiuti and Jain were previously Apple employees, and Yu was an AI researcher at UC Berkeley at the time of the company's founding. In October 2021, Luma raised $4.3 million in seed funding from Matrix Partners, South Park Commons, Amplify Partners, RFC's Andreas Klinger, Context Ventures, and a group of angel investors. In March 2023, the company announced a $20 million Series A round led by Amplify Partners with participation from NVIDIA (NVentures), General Catalyst, and existing investors from the seed round. In January 2024, Luma raised $43 million in a Series B round with participation from Andreessen Horowitz, Amplify Partners, Matrix Partners, NVIDIA, South Park Commons, and a group of angel investors. Sources stated the funding valued Luma at between $200 million and $300 million.
Motion blur can negatively impact the quality of the digital render. To counteract this, Luma advises users to move the phone slowly and avoid rapid rotatory movements. For best results, it is recommended that the object or scene be captured from as many unique angles as feasible. Additionally, it is better to move the device around rather than rotate it from a stationary position during capture. Remaining in the same place and capturing outwards in a spherical motion typically does not work well. The guided capture mode helps users achieve sufficient coverage of the object or scene to be rendered.
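Luma has not published how its app assesses capture quality, but the effect of fast movement can be illustrated with a standard blur heuristic, the variance of the Laplacian: sharp frames have strong edges and score high, while motion-blurred frames score low. A minimal sketch using OpenCV, in which the video path and threshold are assumptions to be tuned per device:

```python
import cv2

def blurriness(frame_bgr) -> float:
    """Variance of the Laplacian: low values suggest motion blur."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Scan a capture video and report frames likely degraded by fast movement.
BLUR_THRESHOLD = 100.0  # assumed value; tune per device and resolution
cap = cv2.VideoCapture("capture.mp4")  # placeholder file name
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    score = blurriness(frame)
    if score < BLUR_THRESHOLD:
        print(f"frame {index}: likely blurred (score {score:.1f}) - move slower")
    index += 1
cap.release()
```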
For guided captures, any object that can be easily viewed from all angles (including the top and bottom) is suitable. For free-form captures, any object is suitable, but more coverage yields superior results; therefore, larger objects may be problematic. To further improve the accuracy of the reconstruction, the entire object must remain framed while it is being scanned, which provides the app with more reflection and object shape data.
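To make "coverage" concrete, the sketch below bins the horizontal viewing directions of a set of camera positions around an object and reports the fraction of directions observed. This heuristic and the example camera positions are assumptions for illustration, not Luma's coverage metric:

```python
import numpy as np

def coverage_fraction(camera_positions, object_center, n_bins=12):
    """Rough angular-coverage heuristic: bin the horizontal viewing
    directions around the object and count how many bins are hit.
    Camera positions are hypothetical (e.g., exported from an SfM tool)."""
    dirs = np.asarray(camera_positions) - np.asarray(object_center)
    azimuths = np.arctan2(dirs[:, 1], dirs[:, 0])  # angle in the ground plane
    bins = ((azimuths + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return len(set(bins.tolist())) / n_bins

# Example: positions covering only one side of the object score poorly.
half_ring = [(np.cos(a), np.sin(a), 0.3) for a in np.linspace(0, np.pi, 8)]
print(f"half-ring coverage: {coverage_fraction(half_ring, (0, 0, 0)):.0%}")
```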
The app may have difficulty handling complex reflections (e.g., curved mirror-like surfaces), curved transparent objects (e.g., plastic water bottles), and very large textureless surfaces (e.g., white walls). The app can capture objects in most lighting conditions as long as textures are not too bright or too dark and remain identifiable. Lighting conditions are baked into the capture, so the scene should be lit however the user would like it to appear in the final result.
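The guidance that textures must remain identifiable can be approximated by screening frames for crushed or clipped luminance. A minimal sketch, assuming arbitrary thresholds that are not Luma's:

```python
import cv2
import numpy as np

def exposure_ok(frame_bgr, low=30.0, high=225.0) -> bool:
    """Heuristic: textures stay identifiable when mean luminance is
    neither crushed toward black nor clipped toward white.
    Thresholds are assumed values, not Luma's."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean = float(np.mean(gray))
    # Fraction of pixels at the extremes, where texture detail is lost.
    clipped = float(np.mean((gray < 5) | (gray > 250)))
    return low < mean < high and clipped < 0.2
```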
In general, any movement in the scene during capture may degrade the quality of the render. For instance, tree leaves moving in the wind or people moving in the background may result in a loss of detail. The software is not compatible with video stabilization, as stabilization causes the frames to have unstable camera intrinsics, an issue that is particularly acute on Android devices. The HDR video option on iOS may also cause artifacts. Luma generally recommends using fixed exposure, although variable exposure can work well in outdoor scenes with varying lighting conditions.
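One way to see why scene movement is harmful: camera motion between consecutive frames can be approximately explained by a single homography, while moving leaves or passers-by leave residual differences after alignment. The sketch below implements that heuristic with OpenCV; it is illustrative only and not part of Luma's pipeline:

```python
import cv2
import numpy as np

def scene_motion(prev_gray, curr_gray) -> float:
    """Estimate motion NOT explained by camera movement: align consecutive
    frames with a RANSAC homography, then measure the residual difference.
    A rough heuristic for flagging moving scene content."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    if d1 is None or d2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 8:
        return 0.0
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return 0.0
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Mean absolute residual: large values hint at leaves, people, etc.
    return float(np.mean(cv2.absdiff(warped, curr_gray)))
```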
Users have the option of uploading raw images, in the form of zips of sequential photos rather than videos, through the Luma web interface. Photos are often higher quality than video frames and can be preferable when the highest render quality is desired.
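A minimal sketch of preparing such an upload, assuming a folder of sequentially numbered JPEGs; only the zip-of-sequential-photos format comes from Luma's guidance, and the paths and naming are placeholders:

```python
import zipfile
from pathlib import Path

def zip_photos(photo_dir: str, out_zip: str = "capture_upload.zip") -> None:
    """Bundle sequential photos into a zip for upload through the Luma
    web interface. Directory layout and file naming are assumptions."""
    photos = sorted(Path(photo_dir).glob("*.jpg"))  # sorting keeps capture order
    if not photos:
        raise FileNotFoundError(f"no .jpg files in {photo_dir}")
    # ZIP_STORED: JPEGs are already compressed, so skip re-compression.
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_STORED) as zf:
        for p in photos:
            zf.write(p, arcname=p.name)  # flat archive, original names
    print(f"wrote {out_zip} with {len(photos)} photos")

zip_photos("captures/chair_scan")  # placeholder path
```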