Viewing entries tagged: architecture

Augmented Reality Building Models for Healthcare


The design industry relies on our ability to communicate design intent. Architects must listen effectively to understand a client's needs, goals, and desires for a proposed project. We take this information and develop 2D drawings and renderings to communicate our understanding of the original request. The problem is that we are trying to communicate a complex, three-dimensional, conceptual world using two-dimensional tools.

Cardinal Health: Improving Sales with Virtual Reality


Cardinal Health worked hand-in-hand with Pulse Design Group to launch a new Virtual Reality customer experience for Cardinal Health’s Inventory Management Sales Solution. This exciting new tool is specifically designed to increase sales, shorten the sales cycle, and further position Cardinal Health as an innovative leader in the healthcare industry.

Comment

Experimenting with Apple's ARKit



Augmented Reality (AR) is coming quickly to your pocket. This year’s developer conferences for Google, Facebook, and Apple all had a very heavy focus on AR and VR technology; it is a safe assumption that augmented reality will be the next “mobile.”

At Apple's Worldwide Developers Conference (WWDC) in June 2017, Apple introduced a new software development kit that enables developers to turn iPhones and iPads into augmented reality devices without the burden of implementing a custom Simultaneous Localization and Mapping (SLAM) solution. In other words, Pokemon Go is just the beginning. There are approximately 300,000 iOS app developers in the United States market alone, so making augmented reality easy to build for could reshape the entire Apple platform.

Pulse Design Group is focused on the comprehensive field of immersive computing, from Virtual Reality to Augmented Reality and Mixed Reality. We started building with the toolkit immediately to investigate potential use cases and to get a better understanding of feasibility and timelines. There are great possibilities for Augmented Reality on iOS devices, and remarkable use cases for training simulation and architectural applications. The following is one of our early proofs of concept, with a brief rundown of the process.

Animated demo: an iPad moving freely around a room while virtual content stays anchored in place.

In this gif, you can see how the iPad is able to move freely around the room while presenting content that appears to persist in virtual space. Apple accomplishes this through what it calls “visual-inertial odometry”: blending and cross-referencing input from the camera with the device’s internal motion sensors.

The current hardware and software requirement is any iPhone or iPad with an A9 processor or better, running iOS 11. We installed the update and got started! We used the Unity development engine in this project due to its current support for ARKit. Below are the techie details of how we brought one of our models into an ARKit project.

To begin developing for ARKit, you first need a current version of Unity, which can be downloaded from the Unity website.

After installing Unity, create a new project, then download the ARKit plugin from the Unity Asset Store.

Now the fun begins!

Initially, we took one of our existing architectural models of an operating room and imported the .fbx file from Autodesk 3ds Max into one of the example scenes provided with the ARKit plugin from the Asset Store, and it worked immediately! One difficult thing to account for is how ARKit will read real-world space, so it is necessary to build the project to a compatible device and run it there; it is hard to get an exact sense of what is happening in the app from the Unity viewport/simulator alone. The ARKit plugin for Unity performs most of its magic in relation to the camera feed, which doesn’t quite come across in the editor viewport.
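For those following along, below is a trimmed-down sketch of the kind of hit-test placement the plugin's bundled UnityARHitTestExample demonstrates: convert a screen tap to ARKit's coordinate space, ask the session for detected planes under it, and move the model there. The names follow the plugin's samples at the time of writing; exact signatures may differ between plugin versions.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS; // namespace used by the Unity ARKit plugin

public class PlaceModelOnPlane : MonoBehaviour
{
    public Transform model; // the imported operating room model

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Convert the screen tap into ARKit's normalized viewport space.
        Vector3 viewportPos = Camera.main.ScreenToViewportPoint(touch.position);
        ARPoint point = new ARPoint { x = viewportPos.x, y = viewportPos.y };

        // Ask the ARKit session for detected planes under the tap.
        List<ARHitTestResult> hits = UnityARSessionNativeInterface
            .GetARSessionNativeInterface()
            .HitTest(point, ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent);

        if (hits.Count > 0)
        {
            // Snap the model onto the first plane hit.
            model.position = UnityARMatrixOps.GetPosition(hits[0].worldTransform);
            model.rotation = UnityARMatrixOps.GetRotation(hits[0].worldTransform);
        }
    }
}
```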

At this point, we built and ran the program on the device and found that our model was inhabiting a surface plane that had been detected and properly used by ARKit! However, this was not enough, and we quickly realized that we needed to implement some sort of simple control scheme to navigate the model. We concluded that the best method was the tried-and-true “pinch to zoom” and “tap to place” control scheme, so we brought in a C# library called “LeanTouch” to set up this behavior in Unity. One issue we ran into was separating a “one-finger tap” from a “two-finger pinch.” Users typically do not remove both pinching fingers from the screen at exactly the same time, leaving a brief moment where one finger is still down; Unity registers that as a one-finger tap and moves the model to the last registered tap position. We currently get around this by requiring three fingers for tap-to-place, so it cannot interfere with the pinch-to-scale gesture.
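In practice we wired this up with LeanTouch's components rather than hand-rolling gesture code, but a minimal sketch using Unity's raw touch API illustrates the three-finger rule that solved the conflict:

```csharp
using UnityEngine;

// Sketch only: we used LeanTouch in the actual app, but the same logic can be
// expressed with Unity's built-in touch API.
public class PinchAndPlace : MonoBehaviour
{
    public Transform model;

    void Update()
    {
        if (Input.touchCount == 2)
        {
            // Pinch to scale: compare finger spacing this frame to last frame.
            Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
            float previous = ((a.position - a.deltaPosition) -
                              (b.position - b.deltaPosition)).magnitude;
            float current = (a.position - b.position).magnitude;
            if (previous > 0f)
                model.localScale *= current / previous;
        }
        else if (Input.touchCount == 3 && Input.GetTouch(2).phase == TouchPhase.Began)
        {
            // Three-finger tap to place: lifting one finger off a pinch can never
            // trigger this branch, so the model no longer teleports unexpectedly.
            PlaceAt(Input.GetTouch(2).position);
        }
    }

    void PlaceAt(Vector2 screenPosition)
    {
        // Placement itself goes through ARKit's hit test (see the earlier sketch).
    }
}
```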

Special note time: Pulse Design Group has a branded RGB shade of green that is used for all promotional marketing and other visual elements. By simply finding and changing the RGB value of the particle emission system, we were able to customize the look of the application to fit our company’s branding.
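In Unity this is a small runtime change as well. A minimal sketch, with an illustrative color value rather than our actual brand green:

```csharp
using UnityEngine;

public class BrandParticles : MonoBehaviour
{
    void Start()
    {
        // Retint the particle system's start color to the brand color.
        ParticleSystem ps = GetComponent<ParticleSystem>();
        var main = ps.main;
        main.startColor = new Color(0.0f, 0.6f, 0.3f); // hypothetical green, not our exact RGB
    }
}
```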

An ARKit demo that features a flyover map application.

The result is an impressive demo for introducing ARKit to our clients. Additionally, we can showcase our work at 1:1 scale, or in a smaller-scale “dollhouse” view, as long as we are using an iOS device!
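Switching between the two views is just a uniform scale on the model's root transform. A quick sketch, assuming a hypothetical 1:50 dollhouse factor:

```csharp
using UnityEngine;

public class ScaleToggle : MonoBehaviour
{
    public Transform model;
    const float DollhouseScale = 0.02f; // 1:50, an illustrative choice

    public void SetDollhouse(bool dollhouse)
    {
        // Unity treats 1 unit as 1 meter under ARKit, so a scale of 1 is true 1:1.
        model.localScale = Vector3.one * (dollhouse ? DollhouseScale : 1f);
    }
}
```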

At Pulse Design Group, we will continue to improve the ARKit demo, add functionality and interaction, investigate the best possible use cases for the software, and build internal concepts. ARKit is very promising and appears to make AR development much easier than it was just three months ago.


To learn more or to further discuss AR technology, contact Andrew London at alondon@pulsedesigngroup.com or Steve Biegun at sbiegun@pulsedesigngroup.com.

 


Mixed Reality


It's difficult to really understand what virtual reality is like without putting a headset on your face. Mixed reality video, a combination of physical and virtual image capture, is probably the best means to explain what a VR experience is like without the use of a headset.

Let's talk about what that means and how we're utilizing exciting new technology like the Vive Tracker to improve communication with our audience.


Medical Equipment Planning

Medical equipment planning is the strategic selection and placement of equipment in healthcare facilities, from hand sanitizer dispensers to MRI machines. Planning and coordinating medical equipment can be one of the biggest challenges of healthcare design.

In this post, we discuss some of the ways that we utilize medical equipment planning to benefit our clients and to coordinate with teams to properly equip medical facilities.


Photogrammetry: About As Real As It Gets

I am standing in the auditorium of an old abandoned German hospital. I can see dust across the room as it falls through a beam of light coming through a window. Above me, I can see the rafters of the abandoned architecture as if they were actually thirty feet above me. Scuff marks, upended tiles, and piles of dirt across the floor indicate that the building has been uninhabited for several decades. An old abandoned piano sits in the center of the room. What I see around me is unequivocally real. Well, as real as photographic reality capture can get. The program, created by Realities.io, hosts recreations of several different environments from across the globe.


Simply put, "photogrammetry" is the science of making measurements from photographs. Over the past few years, photogrammetry has become a popular method for recreating real objects, locations, and people as 3D models. Recognizing that the lines between real and virtually real blur more and more every day, we set out to improve our photogrammetry capabilities so we could bring real elements into less-than-real environments. Realistically captured elements could be used in architecture to show complicated designs in virtual reality, or to preserve past creations as you would with a photograph.


Before discussing how we went through this process, let's talk about drawbacks. The photo capture process can take a significant amount of time. Some photos may need to be retouched or removed from the series entirely for the final photographic recreation to be realistic. Additionally, the 3D mesh constructed by the process can have significant errors, artifacts, and glitches. Some objects are easier to capture than others; reflective surfaces, people, and objects shot against moving backgrounds are notoriously difficult to recreate. That said, the software used to stitch these objects together is rapidly improving.

Our goal was to use photogrammetry to recreate myself in VR as a 3D object, so my likeness could be used as a stand-in character for realistic simulations. Rather than using mannequins, cut-out character silhouettes, or stylized video-game assets, we thought that realistically captured 3D figures would have a greater impact.

Don't worry, it gets weirder.

Source: Tested.com

In this article by Tested, the crew assembles a series of flat lights that provide clean, even, balanced lighting on the model's face. For reality capture, you want your subject to be lit as flatly as possible; lights, shadows, and reflections are all calculated later in the real-time engine (Unreal Engine, Unity, CryEngine, etc.).

Avoid reflections, bright lights, and moving elements. Any sort of movement in the scene can cause issues with the photo calculation. Basically, avoid everything that we did in our first attempt:

In our first attempt, Callum captured close to 200 pictures from varying angles around the room. The most difficult part was getting me to keep my arms parallel to the floor for the duration of the shoot; I had not appreciated how difficult it is to remain perfectly still for several minutes. Optimally, the person being captured should not blink, talk, or turn their head. Breathing is acceptable, though not encouraged. The model stitched together in Autodesk ReMake had serious calculation issues caused by movement in our setting, so we gave it a second attempt.



In our second attempt, we propped my hands on tripods to ensure that they stayed perfectly parallel to the floor. After the capture, we simply removed the tripods from the 3D file. The resulting 3D model had a significantly higher level of quality and resolution than our first attempt.

After exporting the model as an FBX file into 3ds Max, we processed and rigged it for VR. The file is extremely high-poly: 3D models are composed of polygon meshes, and this one had hundreds of thousands of polygons. For optimization, we could manually reduce detail in parts of the model. In a scene with multiple photogrammetry-captured characters, it would be essential to reduce the polygon resolution of as many models as possible. Additionally, I was able to touch up the generated photographic textures in Photoshop to ensure that lighting and shadows were even, especially across the front of the 3D model.
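A small editor script can help keep polygon budgets honest by flagging heavy imports. This is an assumed workflow helper, not the exact tool we used:

```csharp
using UnityEngine;
using UnityEditor;

// Place in an Editor folder. Warns when an imported model exceeds a triangle
// budget so we know it needs decimation before going into a VR scene.
public class PolyBudgetCheck : AssetPostprocessor
{
    const int MaxTriangles = 100000; // hypothetical per-character budget

    void OnPostprocessModel(GameObject model)
    {
        int triangles = 0;
        foreach (MeshFilter mf in model.GetComponentsInChildren<MeshFilter>())
            if (mf.sharedMesh != null)
                triangles += mf.sharedMesh.triangles.Length / 3;

        if (triangles > MaxTriangles)
            Debug.LogWarning(assetPath + ": " + triangles + " triangles exceeds the " +
                MaxTriangles + " budget; consider decimating.");
    }
}
```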

In the future, photogrammetry will become easier as the means of positional calculation become more advanced. If we had captured our images with higher-resolution cameras and uncompressed file formats, we would have had fewer problems with compression and color depth. Perhaps new phones with multiple camera sensors will bring photogrammetry to the masses, with easy-to-use tools for development and distribution.
