AR Shared Worlds 

with Sebastian King & Alberto Vega Rivera (2023)


At the end of our first semester together, Sebastian, Alberto, and I decided to work together on a project. We weren’t sure what we wanted to do, but we knew we wanted to tackle our final studio project as a group instead of alone.


To find what we might work on, we started to mind map our common interests. Our studio had primarily been about emerging technologies, and we wanted to understand either AR or VR development.

We decided to focus on understanding how Unity relates to the different AR frameworks and what each framework can do.

We started outlining the different features of ARCore and AR Foundation.



It wasn’t immediately clear how these features were connected, so we drew a second diagram to try to understand how Unity connected to the AR SDKs we found.


We created a rough schedule to align our conceptual phase with the full project trajectory.


Themes and game strategies we wanted to explore
Game spaces can be defined by different AR surfaces (though this wasn’t something we would be able to do with the Tilt Five). We wanted to combine ideas of spatial AR representation, territory, and perspective. Would the space be static, or would it function like a simulation, similar to builder-type games? We came up with the idea of giving each player a unique perspective through their own AR glasses, potentially also determined by their position around the play space, so that each player would perceive the world differently.

Theo Triantafyllidis

We identified kit bashing (assembling models from pre-existing parts) and exquisite corpse as primary precedents. We imagined our users having their own kit of 3D models that they could choose from to populate their space.


Balance: Ideas / time / technical requirements 
What kind of prototype can we build? What do we need to learn or consider to build it? Following the MoSCoW method (must have, should have, could have, won't have): what is essential, and what could we do without or fake?


Quest: We hit a wall when we realized Meta doesn’t give developers access to the camera feed, so we were unable to track images via passthrough.
Tilt Five: The technical side was fine, but we would have needed to install proprietary software, and we didn’t think the hardware was the right fit for this idea.


We decided to pivot and look into mobile development for our AR experience. We would use AR Foundation and make an iOS build. We also created a new schedule with a set of tasks we wanted to complete, including:
  • Figma diagrams
  • Hand-drawing tiles
  • Hand-drawing 3D assets in Quill
  • Debugging Unity... :’(

user journey diagram 


Our Unity functionality tests, in the order we ran them:
  1. Could we get an AR project onto the phone that could access the camera?
  2. Could we get an AR project onto the phone with camera access and something in the space?
  3. Could we get image tracking to work with a single image? How big do the images have to be? How much of the screen do they have to take up before an object gets placed? If the same image appears more than once, are multiple objects placed? (No, it just picks the first one it sees.)
  4. Could we get image tracking to work with multiple images, but just one model? What images does the app have trouble with and which ones does it easily recognize? 
  5. Could we place different items on different images? Could we get the items to stay in the space if the image isn’t being tracked? (Yes, and surprisingly easily.)
  6. Could we map lists of objects to images, and could we get the current object to change by tapping on it? We ended up switching to the new Input System for this. (Rough sketches of the image tracking and the tap handling follow this list.)
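
For anyone retracing steps 3 through 5, here is a minimal sketch of how the image-tracking piece can be wired up with AR Foundation's ARTrackedImageManager. The class, field, and prefab names are illustrative rather than our exact code, and it assumes a reference image library is already configured on the manager:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Spawns one prefab per detected reference image and keeps it at the
    // image's pose while the image is tracked. Assumes an ARTrackedImageManager
    // (with a reference image library assigned) sits on the same GameObject.
    [RequireComponent(typeof(ARTrackedImageManager))]
    public class ImageTileSpawner : MonoBehaviour
    {
        [System.Serializable]
        public struct TileMapping
        {
            public string referenceImageName; // must match a name in the image library
            public GameObject prefab;         // model to place on that image
        }

        [SerializeField] private List<TileMapping> mappings = new List<TileMapping>();

        private ARTrackedImageManager trackedImageManager;
        private readonly Dictionary<string, GameObject> spawned = new Dictionary<string, GameObject>();

        void Awake() => trackedImageManager = GetComponent<ARTrackedImageManager>();

        void OnEnable() => trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged;
        void OnDisable() => trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged;

        private void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
        {
            foreach (var trackedImage in args.added)
                SpawnOrUpdate(trackedImage);
            foreach (var trackedImage in args.updated)
                SpawnOrUpdate(trackedImage);
            // Nothing is destroyed when an image stops being tracked, which is
            // why placed objects stay in the space on their own.
        }

        private void SpawnOrUpdate(ARTrackedImage trackedImage)
        {
            string imageName = trackedImage.referenceImage.name;

            if (!spawned.TryGetValue(imageName, out GameObject instance))
            {
                var mapping = mappings.Find(m => m.referenceImageName == imageName);
                if (mapping.prefab == null) return; // no model assigned for this image
                instance = Instantiate(mapping.prefab,
                    trackedImage.transform.position, trackedImage.transform.rotation);
                spawned[imageName] = instance;
            }
            else
            {
                // Follow the image while it is actively tracked.
                instance.transform.SetPositionAndRotation(
                    trackedImage.transform.position, trackedImage.transform.rotation);
            }
        }
    }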
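
And for step 6, a rough sketch of tap-to-cycle using the new Input System package. Again the names are illustrative; it assumes the placed objects carry colliders so a physics raycast can hit them:

    using UnityEngine;
    using UnityEngine.InputSystem;

    // Cycles a tapped tile through a list of alternative models.
    // Requires the Input System package to be the active input handler,
    // and colliders on the tappable objects.
    public class TapToCycle : MonoBehaviour
    {
        [SerializeField] private Camera arCamera;       // the AR camera in the scene
        [SerializeField] private GameObject[] variants; // alternative models for this tile

        private int currentIndex;

        void Update()
        {
            if (variants == null || variants.Length == 0) return;

            var touchscreen = Touchscreen.current;
            if (touchscreen == null || !touchscreen.primaryTouch.press.wasPressedThisFrame)
                return;

            Vector2 screenPos = touchscreen.primaryTouch.position.ReadValue();
            Ray ray = arCamera.ScreenPointToRay(screenPos);

            // Only react if the tap landed on this tile (or one of its children).
            if (Physics.Raycast(ray, out RaycastHit hit) && hit.transform.IsChildOf(transform))
                ShowNextVariant();
        }

        private void ShowNextVariant()
        {
            variants[currentIndex].SetActive(false);
            currentIndex = (currentIndex + 1) % variants.Length;
            variants[currentIndex].SetActive(true);
        }
    }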

In order to build to an iPhone, we needed to:
  1. Switch platforms in the Unity project's Build Settings and make a build on a Mac (this step can also be scripted; see the sketch after this list).
  2. Open the generated project in Xcode.
  3. Put an iPhone into developer mode.
  4. Register our Apple account as a developer account (the free version).
  5. Build the project to the connected phone via Xcode.
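
We did all of this through the Build Settings window and Xcode's UI, but for reference, step 1 can also be automated with a small editor script. A minimal sketch (the scene path and output folder are placeholders):

    using UnityEditor;

    // Editor-only helper: switches to the iOS platform and produces an Xcode
    // project, which can then be opened and deployed from Xcode (steps 2-5 above).
    public static class IOSBuild
    {
        [MenuItem("Build/Build iOS Xcode Project")]
        public static void Build()
        {
            EditorUserBuildSettings.SwitchActiveBuildTarget(BuildTargetGroup.iOS, BuildTarget.iOS);

            var options = new BuildPlayerOptions
            {
                scenes = new[] { "Assets/Scenes/Main.unity" }, // placeholder scene
                locationPathName = "Builds/iOS",               // Xcode project output folder
                target = BuildTarget.iOS,
                options = BuildOptions.None
            };

            BuildPipeline.BuildPlayer(options);
        }
    }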



Our initial tests helped us identify which types of imagery the image-processing algorithms could detect. Images with high-contrast features were recognized faster.

initial test imagery


Our final images were high-contrast, with obvious shapes that the image-detection algorithms could easily pick up. We chose to make them "organic" in theme so they would evoke descriptive, natural environments.


This was a successful group project, and we finished knowing we had created something delightful and full of possibilities. We stopped at the alpha stage, but the project could be developed further with more tiles, Quill animations, and enhanced shaders.