
Funded by the European Union

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement N° 101070192

From Concept to Impact

SENSO3D is a cutting-edge research and innovation project developed by Sensomatt under the CORTEX2 Open Call Track 1, part of the Horizon Europe Programme. The project aims to simplify and scale the creation of immersive virtual environments by using AI technologies to generate 3D content from 2D inputs.
At the start, the project was envisioned as an AI-powered 3D object library focused on household appliances and components. However, due to practical constraints in object acquisition and evolving use-case opportunities, the project pivoted early in its lifecycle toward office and business environments, with a strong focus on:

  • Virtual conference rooms
  • Lobbies for AR/VR interaction
  • Functional scenes for remote collaboration and training

This pivot has enabled the creation of highly useful, industry-relevant environments that better align with real-world needs and use cases in education, enterprise, and design.

Technical Innovation at the Core

SENSO3D delivers a powerful combination of AI and immersive technology integration to enable fast, automated, and scalable 3D content creation. Key innovations include:

AI-Powered 2D-to-3D Reconstruction

A deep learning pipeline that:

  • Detects objects in 2D images
  • Reconstructs them as accurate 3D models
  • Generates textures based on object descriptions

This allows users to transform simple images into realistic digital assets ready for AR/VR applications.
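
The three stages above amount to a detect, reconstruct, texture pipeline. The Python sketch below illustrates only its shape: the function bodies are placeholders standing in for the project's actual detection, reconstruction, and texture-generation models, and every name in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str         # e.g. "office chair"
    bbox: tuple        # (x, y, width, height) in image pixels

@dataclass
class Asset3D:
    label: str
    mesh_path: str     # reconstructed geometry, e.g. a .glb file
    texture_path: str  # texture generated from the object description

def detect_objects(image_path: str) -> list[DetectedObject]:
    # Placeholder for the 2D object detector.
    return [DetectedObject("office chair", (120, 80, 200, 300))]

def reconstruct_mesh(obj: DetectedObject) -> str:
    # Placeholder for the 2D-to-3D reconstruction model.
    return f"meshes/{obj.label.replace(' ', '_')}.glb"

def generate_texture(obj: DetectedObject, mesh_path: str) -> str:
    # Placeholder for description-driven texture synthesis.
    return f"textures/{obj.label.replace(' ', '_')}.png"

def image_to_assets(image_path: str) -> list[Asset3D]:
    """Full pipeline: one 2D image in, textured 3D assets out."""
    assets = []
    for obj in detect_objects(image_path):
        mesh = reconstruct_mesh(obj)
        texture = generate_texture(obj, mesh)
        assets.append(Asset3D(obj.label, mesh, texture))
    return assets
```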

Prompt-Based Scene Generation Tool

An AI-powered tool that allows users to describe a room in natural language—e.g., “A conference room with a round table, 8 chairs, and a whiteboard”—and receive a fully assembled virtual scene in return.
This feature greatly enhances accessibility, making immersive scene creation possible for non-technical users.
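
As a rough illustration of how such a tool can work, the sketch below matches prompt keywords against an object catalogue and lays repeated objects out on a circle. The catalogue entries, regex parsing, and layout heuristic are simplified assumptions for illustration, not the project's actual implementation.

```python
import math
import re

# Hypothetical catalogue mapping prompt keywords to library model IDs.
CATALOGUE = {
    "round table": "furniture/table_round_01",
    "chair": "furniture/chair_office_01",
    "whiteboard": "av/whiteboard_01",
}

def parse_prompt(prompt: str) -> list[tuple[str, int]]:
    """Extract (keyword, count) pairs such as '8 chairs' or 'a whiteboard'."""
    items = []
    for keyword in CATALOGUE:
        match = re.search(rf"(\d+)?\s*{keyword}", prompt, re.IGNORECASE)
        if match:
            count = int(match.group(1)) if match.group(1) else 1
            items.append((keyword, count))
    return items

def assemble_scene(prompt: str) -> list[dict]:
    """Turn a parsed prompt into simple placements (model ID + position)."""
    placements = []
    for keyword, count in parse_prompt(prompt):
        for i in range(count):
            # Naive layout: distribute repeated objects on a 2 m circle.
            angle = 2 * math.pi * i / count
            placements.append({
                "model": CATALOGUE[keyword],
                "position": (round(2.0 * math.cos(angle), 2), 0.0,
                             round(2.0 * math.sin(angle), 2)),
            })
    return placements

scene = assemble_scene(
    "A conference room with a round table, 8 chairs, and a whiteboard")
```

A production system would use a language model or semantic parser rather than keyword matching, but the input/output shape is the same: text in, a list of placeable assets out.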

Unity & WebXR Integration

All models and scenes are optimized for Unity and exported for WebXR, allowing deployment across:

  • VR headsets (e.g., Meta Quest, HTC Vive)
  • Web browsers
  • Mobile and desktop platforms

A streamlined import pipeline, metadata tagging, and prefab generation ensure developers and designers can quickly build interactive experiences.
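
One lightweight way to realize such a pipeline is a metadata sidecar per model that the import script reads to tag the asset and pick a prefab template. The JSON schema and file layout below are illustrative assumptions, not the project's published format.

```python
import json
from pathlib import Path

def write_manifest(model_path: str, category: str, tags: list[str]) -> Path:
    """Write a JSON sidecar a Unity import script could read to tag the
    asset and choose a prefab template before WebXR export."""
    manifest = {
        "source": model_path,
        "category": category,        # e.g. "furniture"
        "tags": tags,                # free-text search tags
        "export_profile": "webxr",   # e.g. reduced poly count, compressed textures
        "prefab_template": f"{category}_default",
    }
    out = Path(model_path).with_suffix(".meta.json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(manifest, indent=2))
    return out

write_manifest("assets/chair_office_01.glb", "furniture",
               ["chair", "office", "seating"])
```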

Project Timeline

SENSO3D runs for 9 months (July 2024 – March 2025), structured into three focused sprints:
Sprint 1 (July–October 2024)
  • Foundation development
  • Initial AI tools for object detection
  • First Unity scene prototypes
  • Shift from household to conference-focused objects
Sprint 2 (November–December 2024)
  • Expansion of the 3D object library
  • Integration of AI object classification and reconstruction
  • Development of virtual lobby and conference room environments
  • AI-based Unity import automation
Sprint 3 (January–March 2025)
  • Finalization of AI pipelines
  • Launch of the prompt-based generation tool
  • Completion of the final object repository (1000+ models)
  • Real-world testing and performance optimization
  • Integration with the CORTEX2 ecosystem

Each sprint concluded with a formal deliverable report (D1, D2, D3) outlining technical progress, achievements, and next steps.

Expected Outcomes

By the end of March 2025, SENSO3D will have delivered:

A Structured 3D Object Library

  • 1000+ categorized models (furniture, AV equipment, decor, architectural elements)
  • Optimized for Unity and AR/VR deployment
  • Fully searchable and metadata-tagged
Tools for AR/VR Content Creators & SMEs

  • AI reconstruction tools for creating 3D models from 2D inputs
  • Prompt-based generators to instantly create XR scenes from text
  • Unity/WebXR-ready assets for plug-and-play immersive environments

SENSO3D opens up new possibilities for virtual training, remote collaboration, interior design, and digital twin creation, especially for SMEs that lack large 3D production teams.
