The automated creation of reality-based 3D models from acquired data, and their subsequent interactive exploration, has been at the core of visual computing research for decades. The field of reality-based capture, modeling, and exploration is now thriving, as a result of converging scientific, technological, and market trends. In particular, novel data-driven solutions can profit from the availability and proliferation of affordable high-fidelity visual/3D sensors, scalable GPU-accelerated cloud infrastructures, and personal graphics and VR platforms. In this context, the sub-field of indoor capture and exploration is emerging as a well-defined separate research and application area. Rather than producing and exploring accurate and dense 3D models that faithfully replicate even the smallest geometry and appearance details, indoor solutions focus on abstracting simplified high-level structured models that optimize certain application-dependent characteristics and contain some level of semantic information [Ikehata 2015 ICCV, Pintore 2020 CGF].

The derivation of such models from acquired data has fundamental motivations. First of all, due to the peculiar characteristics of man-made interior spaces, applying reconstruction methods that optimize for completeness, resolution, and accuracy leads to unsatisfactory results [Xiao 2014 IJCV]. Second, structured models with semantic information are required by a variety of applications, from the generation or revision of building information models (BIM) to the creation of systems for guidance, energy management, security, evacuation planning, location awareness, or routing [Ikehata 2015 ICCV]. Finally, high-level editing operations are facilitated by underlying structures [Pintore 2020 CGF]. While the area of structured indoor reconstruction has witnessed substantial progress in the past decade, current solutions are mostly limited to relatively simple environments with constrained shapes, and they provide very limited opportunities for seamlessly going from captured data to a dynamically modifiable representation [Pintore 2020 CGF].

In this project, we aim to substantially advance this research field by proposing specialized solutions for rapidly capturing, exploring, and visually editing indoor environments starting from panoramic images. 360-degree capture, be it purely visual or mixed, has become a standard for indoor acquisition, allowing fast, efficient, and cost-effective capture from single static poses [Yang 2020 TOG, Matterport]. At the same time, panoramic coverage naturally leads to the implementation of immersive solutions. However, since rooms are full of clutter, bounding surfaces are typically untextured, and single- or few-image capture leads to partial coverage and imperfect sampling, deriving a structured and visually editable model is a difficult and ambiguous task that requires the exploitation of priors.

We therefore aim at advancing the state of the art by creating novel technologies and knowledge around the application of modern Artificial Intelligence and Visual Computing technologies towards the creation of Digital Twins for Indoor Applications from panoramic images. We plan, in particular, to make the following contributions:

1. Creating novel data-driven solutions for augmenting panoramic images of indoor environments with geometric and semantic information. The goal will be, starting from panoramic images and possibly coarse depth, to create a high-resolution multi-layered representation that segments the image into major semantic components and enhances it with geometric information (one possible form of such a representation is sketched at the end of this section).

2. Creating novel data-driven solutions for extracting full 3D layouts of indoor structures from panoramic images. The goal will be to significantly extend current state-of-the-art solutions, which cover only very constrained environments, so as to generate full 3D models from a 360-degree capture of complex rooms.

3. Creating novel methods for associating an editable visual representation to structured models derived from panoramic images, and for interactively transferring semantic and visual information between different indoor scenes. We will, in particular, devise solutions targeting the panoramic representation of (1) and/or the 3D models of (2) to manipulate visual representations in general contexts. Moreover, building on the joint representation of color, geometry, and semantic information, we will devise a multi-dimensional feature space that we will exploit to design task-based similarity metrics and to perform classification, clustering, and ordering of indoor environments, which can in turn be used to select styles and features to be transferred between interior scenes.

4. Creating novel interactive and immersive solutions for exploring and editing indoor representations. We will, in particular, develop novel user interfaces, including web-based and immersive ones, to efficiently perform exploration and editing tasks using our new editable representation and the multi-dimensional feature space. These solutions will be based on the general framework of guidance.

The project will tackle fundamental problems in the field of reality-based indoor reconstruction, modeling, and exploration, leading to new methods, algorithms, and reference implementations. While the developed approaches will aim to be of general use, we will draw motivating applications from the industrial context. The researched topics promise to be particularly relevant for the AEC (Architecture, Engineering, and Construction) industry in Qatar. GHD, the company co-funding the project, is particularly interested in advancing the state of research to envision novel automatic solutions geared towards the generation of Digital Twins that support and speed up the design, planning, and management processes related to construction development and to real-estate management and presentation. To this end, GHD will provide the necessary know-how for testing the developed technologies.
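To make contribution (1) more concrete, the following minimal Python sketch illustrates one possible encoding of the envisaged multi-layered panoramic representation. All names and fields (PanoLayer, LayeredPanorama, the particular choice of layers) are hypothetical illustrations rather than project deliverables; the only standard ingredient is the equirectangular spherical unprojection commonly used to attach metric 3D geometry to panorama pixels.

    # Illustrative sketch (hypothetical names): a minimal multi-layered
    # equirectangular panorama carrying joint color, depth, and semantics.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PanoLayer:
        """One layer of the panorama (e.g., the structural shell vs. the
        clutter in front of it), with its own appearance and geometry."""
        rgb: np.ndarray    # (H, W, 3) color, possibly inpainted behind clutter
        depth: np.ndarray  # (H, W) metric distance from the capture point
        mask: np.ndarray   # (H, W) bool, True where this layer is defined

    @dataclass
    class LayeredPanorama:
        labels: np.ndarray            # (H, W) int, per-pixel semantic class id
        layers: dict[str, PanoLayer]  # e.g., {"structure": ..., "clutter": ...}

        def unproject(self, layer: str, u: np.ndarray, v: np.ndarray) -> np.ndarray:
            """Lift pixels (u, v) of a layer to 3D points via the standard
            equirectangular spherical parameterization.
            u, v: same-shape integer pixel index arrays."""
            d = self.layers[layer].depth
            h, w = d.shape
            theta = (u / w) * 2.0 * np.pi - np.pi  # longitude in [-pi, pi)
            phi = np.pi / 2.0 - (v / h) * np.pi    # latitude in [-pi/2, pi/2]
            dirs = np.stack([np.cos(phi) * np.sin(theta),
                             np.sin(phi),
                             np.cos(phi) * np.cos(theta)], axis=-1)
            return dirs * d[v, u][..., None]       # scale unit rays by depth

Separating a structural shell layer from a clutter layer along these lines is what would enable the high-level edits targeted by the project, such as retexturing walls or removing and replacing furniture, while the per-pixel semantics and depth provide the joint feature channels on which the similarity metrics of contribution (3) could be built.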