
Augmented Reality Building Models for Healthcare

The design industry relies on our ability to communicate design intent. Architects must know how to listen effectively in order to understand the needs, goals, and desires behind a proposed project. We take this information and use it to develop 2D drawings and renderings that communicate our understanding of the original project requirements. The problem is that we are trying to communicate a complex, 3D, conceptual world using two-dimensional tools.

Experimenting with Apple's ARKit


Augmented Reality (AR) is coming quickly to your pocket. This year's developer conferences for Google, Facebook, and Apple all had a very heavy focus on AR and VR technology; it is a safe assumption that augmented reality will be the next "mobile."

At Apple's Worldwide Developers Conference (WWDC) in June 2017, Apple introduced a new software development kit that enables developers to turn iPhones and iPads into augmented reality devices without the burden of implementing a custom Simultaneous Localization and Mapping (SLAM) solution. In other words, Pokemon Go is just the beginning. There are approximately 300,000 iOS app developers in the United States alone, so making augmented reality easy to develop on the Apple platform could have an outsized effect.

Pulse Design Group is focused on the comprehensive field of immersive computing, from Virtual Reality to Augmented Reality and Mixed Reality. We started building with the toolkit immediately to investigate potential use cases and to get a better understanding of feasibility and timelines. There are great possibilities for Augmented Reality on iOS devices, with remarkable use cases in training simulation and architectural applications. The following is one of our early proofs of concept, along with a brief rundown of the process.


In this gif you can see how the iPad is able to move freely around the room while presenting content that appears to persist in the virtual space. Apple accomplishes this through what they call "visual-inertial odometry": the blending and cross-referencing of input from the camera with input from the mobile device's internal motion sensors.
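As a rough illustration of the idea (and emphatically not Apple's implementation), here is a toy complementary filter in C#: a fast but drifting inertial estimate is continually nudged toward a slower camera-derived position fix. The class and parameter names are ours.

```csharp
using UnityEngine;

// Toy illustration of the sensor-fusion idea behind visual-inertial
// odometry. ARKit's real pipeline is far more sophisticated than this.
public class OdometryFusion
{
    Vector3 fusedPosition;

    // Call every frame: dead-reckon with the IMU velocity, then blend
    // toward the camera's slower but drift-free position estimate.
    public Vector3 Step(Vector3 imuVelocity, Vector3 cameraPosition,
                        float deltaTime, float blend = 0.05f)
    {
        fusedPosition += imuVelocity * deltaTime;                           // fast inertial update
        fusedPosition = Vector3.Lerp(fusedPosition, cameraPosition, blend); // drift correction
        return fusedPosition;
    }
}
```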

The current requirements are any iPhone or iPad with an A9 processor or better, running iOS 11. We installed the update and got started! We used the Unity development engine for this project due to its current support for ARKit. Below are the techie details of how we implemented one of our models in an ARKit project.

A current version of Unity is needed to begin developing for ARKit; it can be downloaded from Unity's website.

After installing Unity, create a new project, then download the ARKit plugin from the Unity Asset Store.


Now the fun begins!

Initially, we took one of our existing architectural models of an operating room and imported the .fbx file from Autodesk 3DS Max into one of the example scenes provided with the ARKit plugin from the Asset Store, and it worked immediately! One difficult thing to account for is how ARKit will read real-world space, so it is necessary to package and export your project to the device and run the software on compatible hardware; it is hard to get an exact sense of what is happening in the app from the Unity viewport or simulator alone. The ARKit plugin for Unity performs most of its magic in relation to the camera viewport, which doesn't fully come across in the editor viewport.

At this point, we built and ran the program on the device and found that our model was inhabiting a surface plane that ARKit had detected and was using properly! However, this was not enough, and we quickly realized that we needed a simple control scheme for navigating the model. We concluded that the best method was the tried-and-true "pinch to zoom" and "tap to place" scheme. To do this, we implemented a C# library called "LeanTouch," which allowed us to set up this behavior in Unity. One issue we ran into was separating a one-finger tap from a two-finger pinch: users typically do not lift both pinching fingers from the screen at exactly the same time, leaving a brief moment when only one finger is down. Unity registered that moment as a one-finger tap and moved the model to the last registered tap point. We currently get around this by requiring three fingers for tap-to-place, so it cannot interfere with pinch-to-scale; see the sketch below.
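For readers who want to try this without LeanTouch, here is a minimal sketch of the same scheme using Unity's built-in touch input. The class and `modelRoot` field are our own illustrative names, and it assumes the detected planes carry colliders so a physics raycast can hit them.

```csharp
using UnityEngine;

// Minimal sketch: two-finger pinch to scale, three-finger tap to place.
// Requiring three fingers for placement keeps a lingering pinch finger
// from being misread as a placement tap.
public class ModelTouchControls : MonoBehaviour
{
    public Transform modelRoot;   // the imported architectural model
    float lastPinchDistance;

    void Update()
    {
        if (Input.touchCount == 2)
        {
            // Scale the model by the frame-to-frame change in pinch distance.
            Touch a = Input.GetTouch(0);
            Touch b = Input.GetTouch(1);
            float pinch = Vector2.Distance(a.position, b.position);

            if (a.phase == TouchPhase.Began || b.phase == TouchPhase.Began)
            {
                lastPinchDistance = pinch;
            }
            else if (lastPinchDistance > 0f)
            {
                modelRoot.localScale *= pinch / lastPinchDistance;
                lastPinchDistance = pinch;
            }
        }
        else if (Input.touchCount == 3 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Reposition the model where the first finger's ray hits a
            // detected plane (assumes the plane has a collider attached).
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                modelRoot.position = hit.point;
            }
        }
    }
}
```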

Special note time: Pulse Design Group has a branded RGB shade of green that is used for all promotional marketing and other visual elements. By simply finding and changing the RGB value in the particle emission system, we were able to customize the look of the application to fit our company's branding.
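As a sketch of how simple that tweak is (the color values below are placeholders, not our actual brand green):

```csharp
using UnityEngine;

// Sketch: tint a ParticleSystem's emitted particles to a brand color.
public class BrandParticles : MonoBehaviour
{
    void Start()
    {
        ParticleSystem ps = GetComponent<ParticleSystem>();
        var main = ps.main;                           // main module holds start values
        main.startColor = new Color(0f, 0.7f, 0.3f);  // placeholder green, not the real brand RGB
    }
}
```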

An ARKit demo that features a flyover map application.

The result is an impressive demo for showing and introducing ARKit to our clients. Additionally, we are able to showcase our work at 1:1 scale, or in a smaller dollhouse view, as long as we are using an iOS device!

At Pulse Design Group, we will continue to improve the ARKit demo, add functionality and interaction, investigate the best possible use-case for the software, and build internal concepts. ARKit is very promising and appears to make AR development much easier than it was just three months ago.  

To learn more or further discuss AR technology, contact Andrew London or Steve Biegun.



Photogrammetry: About As Real As It Gets

I am standing in the auditorium of an old abandoned German hospital. I can see dust across the room as it falls through a beam of light coming through a window. Above me, I can see the rafters of the abandoned architecture as if they were actually thirty feet above me. Scuff marks, upended tiles, and piles of dirt across the floor indicate that the building has been uninhabited for several decades. An old abandoned piano is situated in the center of the room. What I see around me is unequivocally real. Well, as real as photographic reality capture can get. The program hosts recreations of several different environments from across the globe.


Simply put, "photogrammetry" is the science of making measurements from photographs. Over the past few years, photogrammetry has become a popular method for recreating real objects, locations, and people as 3D models. Recognizing that the lines between real and virtually real blur more and more every day, we set out to improve our photogrammetry capabilities so we could take real elements into less-than-real environments. Realistically captured elements could be used in architecture to show complicated designs in virtual reality, or to preserve past creations as you would with a photograph.
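To make "measurements from photographs" concrete, here is a toy sketch of the triangulation step at the heart of photogrammetry: the same surface point spotted in two photos defines two rays in space, and the point is recovered where those rays nearly intersect. The function and its inputs are illustrative only; real packages solve this simultaneously for millions of matched features.

```csharp
using UnityEngine;

// Toy triangulation: given two camera positions and the viewing rays
// toward the same feature in two photos, recover the 3D point as the
// midpoint of the closest approach between the two rays.
public static class Triangulation
{
    public static Vector3 Triangulate(Vector3 cam1, Vector3 dir1,
                                      Vector3 cam2, Vector3 dir2)
    {
        Vector3 w0 = cam1 - cam2;
        float a = Vector3.Dot(dir1, dir1);
        float b = Vector3.Dot(dir1, dir2);
        float c = Vector3.Dot(dir2, dir2);
        float d = Vector3.Dot(dir1, w0);
        float e = Vector3.Dot(dir2, w0);
        float denom = a * c - b * b;        // approaches 0 for parallel rays

        float t1 = (b * e - c * d) / denom; // parameter along ray 1
        float t2 = (a * e - b * d) / denom; // parameter along ray 2

        // Closest points on each ray; average them for the estimate.
        return ((cam1 + t1 * dir1) + (cam2 + t2 * dir2)) * 0.5f;
    }
}
```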

Before discussing how we went through this process, let's talk about drawbacks. The photo capture process can take a significant amount of time. Some photos may need to be retouched or removed entirely from the series in order for the final photographic recreation to be realistic. Additionally, the 3D mesh constructed by the process can have significant errors, artifacts, and glitches. Some objects are easier to capture than others; reflective surfaces, people, and objects shot against moving backgrounds are notoriously difficult to recreate. That being said, the software used to stitch these objects together is rapidly improving.

Our goal was to use photogrammetry to recreate myself as a 3D object in VR that could be used as a stand-in character for realistic simulations. Rather than using mannequins, cut-out character silhouettes, or stylized video-game assets, we thought that realistically captured 3D figures would have a greater impact.

Don't worry, it gets weirder.



In this article by Tested, the crew assembles a series of flat lights that provide clean, even, and balanced lighting on the model's face. For reality capture, you want your original 3D model to be lit as flatly as possible; lights, shadows, and reflections are all calculated later in the VR simulation program (Unreal Engine, Unity, CryEngine, etc.).

Avoid reflections, bright lights, and moving elements. Any sort of movement in the scene can cause issues with the photo calculation. Basically, avoid everything that we did in our first attempt:

In our first attempt, Callum captured close to 200 pictures from varying angles around the room. The most difficult part of this attempt was getting me to keep my arms parallel to the floor for the duration of the shoot - a much more difficult feat than I had imagined. I did not initially understand how difficult it can be to remain perfectly still for several minutes. Optimally, the person being captured should not blink, talk, or turn their head. Breathing is acceptable though not encouraged. The model that was stitched together in Autodesk Remake had serious calculation issues caused by movement in our setting, so we gave it a second attempt. 


In our second attempt, we propped my hands with tripods to ensure that they would be perfectly parallel to the floor. After the capture, we simply removed the tripods from the 3D file. The 3D model had a significantly higher level of quality and resolution than our first attempt.

After exporting an FBX file to 3DS Max, we processed and rigged the 3D model for VR. The file is extremely high-poly (3D models are composed of meshes of polygons; this one had hundreds of thousands). For optimization, we could manually reduce the polygon count in parts of the model. In a scene with multiple photogrammetry-captured characters, it would be essential to reduce the polygon resolution of as many models as possible. Additionally, I was able to process and touch up the generated photographic textures in Photoshop to ensure that lighting and shadows were even, especially across the front of the 3D model.

In the future, photogrammetry will become easier as the means for positional calculation become more advanced. If we had captured our images with higher-resolution cameras and uncompressed file formats, we would have had fewer problems with compression and color depth. Perhaps new phones with multiple camera sensors will bring photogrammetry to the masses with easy-to-use tools for development and distribution.



Augmented Reality: More than Pokemon


You may have heard of a new app taking the smartphone gaming world by storm. Since Pokemon Go's release, Nintendo's valuation increased by over $9 billion. It quickly surpassed Twitter and Instagram in daily active users. On top of that, local businesses have capitalized on the growing trend by offering deals and discounts to users who play the game.


Everyone seems to be hopping on board the Pokemon Go train. What is it about this new game that is so engaging that it can have such a tremendous impact? If I had to venture a guess, it doesn't have anything to do with Nintendo's trademark yellow rodent.

At its core, "Pokemon Go" is an augmented reality smartphone game. The same technology that powers this wildly successful game has the capability to become an integral tool for architecture and construction over the next several years.


Augmented reality (AR) has become the hot new buzzword in the tech industry. Simply put, AR is a form of technology that superimposes computer-generated data on top of what a user would typically see in the real world. If AR turns out to be even half as exciting as I anticipate, it could revolutionize every industry from architecture and construction to retail and education.

Saying that Pokemon Go is an AR application is certainly not incorrect; the image displayed on the user's smartphone is, in fact, computer generated, and it is, in fact, overlaid on what the smartphone camera sees. However, I would argue that the whole idea of augmented reality is far broader than what we have seen from this game. I was recently a guest on KCUR 89.3 to talk about AR, Pokemon Go, and how this new type of technology will change the world.

When we talk about AR as it can be used for practical (not gaming) applications, we are typically referring to hardware rather than software. Products like the Microsoft Hololens, Magic Leap, and the Meta headset are all exciting uses of AR technology that we will hear more about over the next few years. Companies like Apple, Google, and Samsung have been investing billions into new AR technology in anticipation of this new wave. AR has burst onto the tech scene and it is certainly here to stay.

In this metaphor, the whale is AR. Also, the whale is from an actual AR demonstration from the forthcoming Magic Leap AR system.

Microsoft Hololens


The Microsoft Hololens is a head-mounted AR headset that overlays holograms onto what the user sees. It uses cameras to scan the user's surroundings so that it can overlay elements with careful precision. The image in the headset is projected onto a reflective transparent panel in front of the user's eyes. This enables the user to see through the image while also seeing the overlaid information.

In my opinion, the Hololens is a fantastic use of technology. The headset sits at a lofty $3,000 and the field of view is extremely small (imagine holding a deck of cards at arm's length; that is roughly how large the overlay image is in the Hololens), but there is a massive amount of potential packed into this first-generation headset. With the Hololens, a user can manipulate holograms and visual elements in their AR display by making gestures with their hands. NASA currently uses the Microsoft Hololens to communicate with and direct astronauts on the International Space Station.

Meta 2

The Meta 2 headset is another head-mounted AR system that is new to the scene. Boasting a 2560 x 1440 high-dpi display and a 90-degree field of view, the headset seems like a fantastic alternative to the Hololens.

While the Hololens is an untethered device that allows the wearer to roam around the room with near-perfect tracking, the Meta headset remains tethered to a PC. This arguably gives the Meta greater flexibility with processing power, battery usage, display resolution, and cost. In my opinion, though, the future of AR won't involve cables.

Magic Leap


There is very little public information about Magic Leap. After several rounds of investment, Magic Leap is currently valued at $4.5 billion. The company is partnered with Lucasfilm's ILMxLAB and has received investments from giants such as Disney, Google, J.P. Morgan, and China's e-commerce powerhouse Alibaba.

We can expect to hear more (or rather, anything) about Magic Leap in the next year. Magic Leap is rumored to publicly unveil its new device at CES in early 2017. Expect a blog post either about how exciting the new hardware seems to be or about how disillusioned I am with Magic Leap.


What we know is that Magic Leap is creating an AR system that seems to overlay images with better quality than what is seen with the Microsoft Hololens or the Meta 2 headset. Some tech enthusiasts speculate that the Magic Leap system might literally project light onto the user’s eyes. It might also be as simple as a pair of glasses that the user wears in a non-obtrusive manner. Whatever the case may be, the visual quality seems to be miles beyond anything that we have seen from previous augmented reality headsets. We can expect to hear more about what exactly Magic Leap is and when it will be released over the next year. Surely, I won’t shut up about it any time soon.

Augmented reality will be the next big wave of technology after virtual reality. Over the next five years, we can reasonably expect to see a surge in investment and anticipation for new wearable computers. Honestly, I wouldn't be surprised if we replace computer monitors with VR/AR systems that do a better job of providing real-time information. Before long, we may view "Pokemon Go" as an extremely primitive use of AR from a simpler time. Either that, or Pokemon will be projected directly onto our eyeballs, a concept that I find equally intriguing and terrifying.