Augmented Reality


INTRODUCTION

Augmented Reality is the art of superimposing computer-generated content over a live view of
the world. It is the integration of digital information with live video or the user's environment in
real time.
Augmented reality apps build upon the world around us by displaying information overlays and
digital content tied to physical objects and locations. For example, with games like Pokemon
Go, you can capture and train digital creatures in the real world. You can also bring a static print
ad to life, watch a movie trailer by pointing your smartphone camera at a poster, or discover
nearby establishments and landmarks with your mobile device.


OVERVIEW

What is augmented reality, actually?

It is adding a layer of digital information on top of the physical world around us. One of the smartest ways we use augmented reality is through smartphone technology. Smartphones have a number of features that make augmented reality more attractive, effective
and easier to implement.

So, how does it work?

Augmented reality works by using smartphone features such as the camera, compass and
hardware sensors. With these, the smartphone figures out where you are in relation to the world around you: your orientation, your environment, your present location and the direction in which the device is pointing. Once the device knows where it is, it can start overlaying information on top of the real world; this is how augmented reality experiences are built.
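
As a rough illustration of that sensor fusion, the Kotlin sketch below reads Android's rotation-vector sensor to work out which way the device is pointing. It is a minimal sketch, not taken from any particular AR framework, and listener registration with SensorManager is omitted.

    import android.hardware.Sensor
    import android.hardware.SensorEvent
    import android.hardware.SensorEventListener
    import android.hardware.SensorManager

    // Minimal sketch: fuse the rotation-vector sensor into azimuth/pitch/roll.
    // An AR layer would use these angles to anchor its overlays.
    class OrientationTracker : SensorEventListener {
        private val rotationMatrix = FloatArray(9)
        private val orientation = FloatArray(3) // azimuth, pitch, roll (radians)

        override fun onSensorChanged(event: SensorEvent) {
            if (event.sensor.type == Sensor.TYPE_ROTATION_VECTOR) {
                SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
                SensorManager.getOrientation(rotationMatrix, orientation)
                val azimuthDegrees = Math.toDegrees(orientation[0].toDouble())
                // azimuthDegrees is the compass direction the device faces.
            }
        }

        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
    }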




HISTORY

     In 1966, Prof. Ivan Sutherland invented the head-mounted display, which was the first step toward making AR a possibility.

     In 1990, Prof. Tom Caudell coined the name “Augmented Reality” and developed complex software at Boeing to help technicians assemble cables in aircraft.

     Until 1999, augmented reality remained very much a toy of scientists, since it was expensive, complex and bulky. In 1999, Hirokazu Kato of the Nara Institute of Science and Technology released ARToolKit to the open-source community.

     After the surge of public interest in smartphones, Mobilizy was among the pioneers developing apps around augmented-reality concepts with the help of mobile sensors and cameras. Once ARToolKit was ported to Adobe Flash, the journey reached the point where we are today with augmented-reality technology.

TYPES OF AUGMENTED REALITY

  • GPS AND COMPASS TECHNOLOGY
  • MARKER BASED TRACKING
  • MARKER LESS TRACKING
  • PROJECTION BASED AR
  • RECOGNITION BASED AR
  • LOCATION BASED AR
  • OUTLINING AR
  • SUPERIMPOSITION BASED AR


GPS AND COMPASS TECHNOLOGY

Found in smartphones and tablets, this approach uses the device's GPS, compass and high-speed wireless networks. It provides useful local web content and added services tied to the user's 2D geolocation. It is fairly imprecise due to the current inaccuracy of GPS positioning.

MARKER BASED TRACKING

Marker-based AR uses a camera and a visual marker to determine the center, orientation and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
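
The tracking loop itself is conceptually simple. The Kotlin sketch below shows its shape; every type and helper here is a hypothetical stand-in for what a toolkit such as ARToolKit provides, not its actual API.

    // All types and helpers below are hypothetical stubs, for illustration only.
    class CameraFrame                               // a captured camera image
    class MarkerCorners                             // the marker's four detected corners
    data class Pose(val x: Float, val y: Float, val z: Float, val yawDeg: Float)

    fun detectMarker(frame: CameraFrame): MarkerCorners? = null           // stub: locate the marker
    fun estimatePose(corners: MarkerCorners): Pose = Pose(0f, 0f, 0f, 0f) // stub: solve the pose
    fun renderOverlay(pose: Pose) { /* stub: draw content anchored at the pose */ }

    // Per camera frame: find the marker, recover its position and orientation
    // relative to the camera, then draw the virtual content at that pose.
    fun trackFrame(frame: CameraFrame) {
        val corners = detectMarker(frame) ?: return
        val pose = estimatePose(corners)
        renderOverlay(pose)
    }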

MARKERLESS TRACKING

This is one of the best tracking methods currently available. It performs active tracking and recognition of the real environment on any type of support, without using specially placed markers, and allows more complex applications of the augmented reality concept.

PROJECTION BASED AR

Just like anything else which is beyond our reach, projection based AR feels more attractive (at least as of now) compared to an AR app you can install on your phone. As is obvious by its name, projection based AR functions using projection onto objects. What makes it interesting is the wide array of possibilities.

One of the simplest is the projection of light onto a surface. Speaking of lights, surfaces and AR: did you ever think that those lines on your fingers (which divide each finger into three parts) could create 12 buttons? This is one of the simplest uses of projection-based AR: light is fired onto a surface, and interaction happens by touching the projected surface with a hand. Detecting where the user has touched the surface is done by differentiating between the expected (or known) projection image and the projection altered by the interference of the user's hand.
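
A toy version of that differencing step might look like the following Kotlin sketch, which treats both images as grayscale arrays and flags strongly differing pixels as hand interference. A real system would also handle calibration, noise and lighting.

    import kotlin.math.abs

    // Compare the projection we expect with what the camera actually sees;
    // pixels that differ strongly are likely occluded by the user's hand.
    fun findTouchedPixels(
        expected: Array<IntArray>,  // grayscale values of the known projection
        observed: Array<IntArray>,  // grayscale values seen by the camera
        threshold: Int = 40
    ): List<Pair<Int, Int>> {
        val touched = mutableListOf<Pair<Int, Int>>()
        for (y in expected.indices) {
            for (x in expected[y].indices) {
                if (abs(expected[y][x] - observed[y][x]) > threshold) {
                    touched.add(x to y)   // (x, y) is a candidate touch point
                }
            }
        }
        return touched
    }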

One widespread use of projection-based AR is non-interactive. Projection onto objects can be used to create deception about the position, orientation and depth of an object. In such a case an object is chosen and its structure studied in depth; the object's distance from the projector is calculated, and the projected light sequence is then designed carefully to deceive the viewer's mind.

RECOGNITION BASED AR

Recognition-based AR focuses on recognizing objects and then providing more information about them. For example, when you use your mobile phone to scan a barcode or QR code, you are using object-recognition technology. In fact, except for location-based AR systems, all other types use some form of recognition to detect the object over which augmentation has to be done.

Recognition-based AR technology has varied uses as well. One of them is to detect the object in front of the camera and provide information about it on screen. This is similar to the AR apps for travellers (location browsers); the difference lies in the fact that AR location browsers usually do not know about the objects they see, while recognition-based AR apps do.

A second type of recognition-based AR application recognizes an object and replaces it with something else. The applications, once again, are abundant and the possibilities endless. Some examples are given below:

* Simulation of objects in 3D. In this case, printed version of a recognizable object (such as a card with QR code printed on it, or a picture provided by the app printed on paper) is shown to the camera. This printed version is called “Augmented Reality Marker” and acts as a reference for the AR app running on the system. The augmentation app detects and recognizes the marker and tries to understand the distance and orientation of the print.

Once the recognition is complete, it replaces the marker on screen with a 3D version of the corresponding object. This allows the user to investigate the object in more detail and from various angles. Rotating the marker would rotate the 3D imagery as well.

* Yet another famous use of recognition AR tech is translation of words on the fly. In this case, the app reads the words seen by the camera and tries to recognize the words using OCR (Optical Character Recognition) technology and then replaces the words on screen with their translated versions. This can be immensely useful for tourists when travelling to places where the locally prevalent language is not known.

* Recognition-based AR can also be used in education. Markers for two or more objects are kept together; the app detects the multiple markers and tries to simulate relationships among them. For example, printed cards can represent atoms (say, in a chemistry class), and based on their mutual distance the AR app can show how a reaction would take place; that would be just one use of AR in education. Detection of drawings and sketches by more intelligent apps can help teach small children: e.g. a picture of a giraffe could be replaced with a living 3D version, so children can see how it looks in reality and interact with it on a touchscreen!

* Recognition of printed versions of 3D objects can help create 3D simulations of those objects without having to actually build a physical model. This can be of great aid to people who constantly work with 3D applications such as architects and animators. We will talk about them in later chapters.

* Recognition based AR can be used in projectors to automatically detect a projectable surface and project on only the projectable area. The projection can be made interactive by using dynamic objects in the surroundings to command the projector. This can eventually be used with the projection mapping technique to autodetect various types of objects and send out projection imagery according to the size, distance and colour of the surface on which projection could be done. 

With projection-based AR, your imagination is the only limit. There is a lot of research going on in this field, and with time more and more applications will pour in. If you are really excited about creating something like this of your own, we have tips for you at the end of
this book. For now, let us look at location-based AR.

LOCATION BASED AR

It would be an injustice not to mention this category when talking about AR. Location-based augmented reality is one of the most widely implemented applications of AR. The strongest force behind this is the easy availability of smartphones and the features they provide for location detection. Location-based AR is mostly used to help travellers on their journey.

Location-based AR is most often used in AR location browsers, which help users discover interesting places around their current location. This method works by detecting the user's location and orientation from the phone's GPS, digital compass and accelerometer, predicting where the user is looking, and then adding information on screen about the objects visible through the camera.
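
The math underneath such a location browser is compact. The Kotlin sketch below (a minimal illustration, not any app's actual code) computes the standard great-circle bearing from the user to a point of interest and checks it against the device azimuth and an assumed camera field of view.

    import kotlin.math.abs
    import kotlin.math.atan2
    import kotlin.math.cos
    import kotlin.math.sin

    // Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees.
    fun bearingDegrees(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
        val phi1 = Math.toRadians(lat1)
        val phi2 = Math.toRadians(lat2)
        val dLon = Math.toRadians(lon2 - lon1)
        val y = sin(dLon) * cos(phi2)
        val x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon)
        return (Math.toDegrees(atan2(y, x)) + 360.0) % 360.0
    }

    // The POI is drawn only if its bearing falls within the camera's
    // horizontal field of view centered on the device's compass azimuth.
    fun isInView(poiBearing: Double, deviceAzimuth: Double, fovDegrees: Double = 60.0): Boolean {
        val diff = abs((poiBearing - deviceAzimuth + 540.0) % 360.0 - 180.0)
        return diff <= fovDegrees / 2
    }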

Outlining AR

Though the human eye is known to be the best camera in the world, it has limitations. We cannot look at things for too long, we cannot see well in low-light conditions, and the eye certainly cannot see infrared. For such cases, special cameras were built, and augmented reality apps that perform outlining use them. Once again, object recognition sits behind everything that outlining AR can do. Let us begin with a life-saving implementation example.

When driving a car in foggy weather, the boundaries of the road may not be clearly visible to the human eye, leading to mishaps. Advanced cameras tuned to see the surroundings in low-light conditions can be used to outline the road boundaries within which the car should stay. Such a system would prove very useful in avoiding accidents, and with extra sensors capable of detecting surrounding objects (e.g. by ultrasound), the overall risk of hitting
a living object can be minimized as well. The technology can help save pedestrian lives too: outlining people crossing the road on a HUD (heads-up display) windscreen can be more useful than a separate infrared video feed.

SUPERIMPOSITION BASED AR

Superimposition-based AR provides an ‘alternate’ view of the object in question, either by replacing the entire view with an augmented view of the object or by replacing a portion of it. In this case, once again, object recognition plays a vital role: logically, if the application does not know what it is looking at, it most certainly cannot replace the original view with an augmented one.

Depending on what type of view is required, the technology can be used for multiple purposes.

* Doctors can use the technology to examine a patient from various angles in real time. A live feed from an X-ray machine can be used to superimpose the X-ray view of the patient's body part on the real image, providing a better understanding of the damage to bones. The application can work via a head-mounted display or special goggles; alternatively, the view can be shown on a screen where the video feed comes from a real camera with the X-ray
vision imposed on it.

* In military applications, superimposition-based AR can provide multiple views of a target object without showing extra text or blocking the soldier's view of other important objects around. If you have been shooting enemies with your computer mouse, you already know how it would appear. Superimposing an infrared or radiation view of an object or area can help save lives, or win wars!

* Superimposing ancient pictures over real ones can provide interesting views of historical places. Broken monuments can come back to life in all their original glory; perhaps entire eras, complete with landscapes, can be relived
with AR.

* Letting a tiger or a snake near you would be a horrifying experience with hazardous consequences, except when superimposition AR is used to bring them to you. Placing a person in a location or situation that is otherwise dangerous can be safely accomplished via superimposition AR!

* Superimposing a real object with its internal view can be helpful in education as well, for instance to study bone structure. Though we have touched on some of the most important types of augmented reality, there are a few others which cannot easily be classified into one of the above.

APPLICATIONS


  •  Marketing and Advertisement
  •  Medical
  •  Education
  •  Entertainment, toys and games
  •  Military
  •  Navigation
  •  Product Launches
  •  Presentations

CURRENT USES OF AUGMENTED REALITY


  •  Real world Gaming applications
  •  DIY Car repair
  •  Sales and Marketing
  •  Home constructions and designs
  •  In real-world sports such as cricket and football

FUTURE OF AUGMENTED REALITY


  •  Better visualization of products by scanning barcodes, QR codes or by pure object recognition.
  •  Learning tools in schools
  •  City Planning and Architecture
  •  Political campaigns
  •  Gaming Industry
  •  Manufacturing
  •  Interior designing.

CONCLUSION

Thus, augmented reality is an evolving technology with tons of cool features, and in the future
it is going to be a revolution in gaming and design environments.


APP ICON SIZES

Android:
Formats supported: PNG, SVG, 9-Patch images (recommended)



Name                 Density   Pixels         Usage
ldpi (0.75x)         120 dpi   36 x 36 px     Low-density screen
mdpi (baseline)      160 dpi   48 x 48 px     Medium-density screen
hdpi (1.5x)          240 dpi   72 x 72 px     High-density screen
xhdpi (2x)           320 dpi   96 x 96 px     Extra-high-density screen
xxhdpi (3x)          480 dpi   144 x 144 px   Extra-extra-high-density screen
xxxhdpi (4x)         640 dpi   192 x 192 px   Launcher icon only*
playstore-icon.png   /         512 x 512 px   Google Play store


iOS:
Formats supported: PNG

Name                        Size (px)   Usage
Icon-App-29x29@1x.png       29 x 29     iPhone/iPad Settings icon (non-Retina), iOS 7 or later
Icon-App-29x29@2x.png       58 x 58     iPhone/iPad Settings icon, Retina display (@2x): iPhone 6s, iPhone 6, iPhone SE, iPad Pro, iPad, iPad mini
Icon-App-29x29@3x.png       87 x 87     iPhone Settings icon, Retina display (@3x): iPhone 6s Plus, iPhone 6 Plus
Icon-App-40x40@1x.png       40 x 40     iPhone/iPad Spotlight results icon (non-Retina), iOS 7 or later
Icon-App-40x40@2x.png       80 x 80     iPhone/iPad Spotlight results icon, Retina display (@2x): iPhone 6s, iPhone 6, iPhone SE, iPad Pro, iPad, iPad mini
Icon-App-40x40@3x.png       120 x 120   iPhone Spotlight results icon, Retina display (@3x): iPhone 6s Plus, iPhone 6 Plus
Icon-App-60x60@1x.png       60 x 60     Main iPhone app icon, iOS 7 or later
Icon-App-60x60@2x.png       120 x 120   iPhone app icon (@2x): iPhone 6s, iPhone 6, iPhone SE
Icon-App-60x60@3x.png       180 x 180   iPhone app icon, Retina display (@3x): iPhone 6s Plus, iPhone 6 Plus
Icon-App-76x76@1x.png       76 x 76     iPad app icon
Icon-App-76x76@2x.png       152 x 152   iPad app icon, Retina display (@2x): iPad, iPad mini
Icon-App-76x76@3x.png       228 x 228
Icon-App-83.5x83.5@2x.png   167 x 167   iPad Pro app icon, Retina display (@2x)


SPLASH SCREEN SIZES

Android:
Formats supported: PNG, JPEG, JPG, SVG, 9-Patch images (recommended)


Screen     Portrait         Landscape
LDPI       200 x 320 px     320 x 200 px
MDPI       320 x 480 px     480 x 320 px
HDPI       480 x 800 px     800 x 480 px
XHDPI      720 x 1280 px    1280 x 720 px
XXHDPI     960 x 1600 px    1600 x 960 px
XXXHDPI    1280 x 1920 px   1920 x 1280 px



iOS:
Formats supported: PNG

Tablet (iPad)
Non-Retina (1x)
Portrait: 768x1024px
Landscape: 1024x768px

Retina (2x)
Portrait: 1536x2048px
Landscape: 2048x1536px

Handheld (iPhone, iPod)
Non-Retina (1x)
Portrait: 320x480px
Landscape: 480x320px

Retina (2x)
Portrait: 640x960px
Landscape: 960x640px

iPhone 5 Retina (2x)
Portrait: 640x1136px
Landscape: 1136x640px

iPhone 6 (2x)
Portrait: 750x1334px
Landscape: 1334x750px

iPhone 6 Plus (3x)
Portrait: 1242x2208px
Landscape: 2208x1242px

ANDROID MATERIAL DESIGN

Overview

Android Material Design provides a new visual language that synthesizes the classic principles of good design with the innovation and possibility of technology and science. Its key ideas:
  • Rational space and a system of motion.
  • Print-based design (typography, grids, space, scale, color and use of imagery).
  • Motion is meaningful.
Material Environment:

Android Material Design describes a 3D world, which means all objects have x, y and z dimensions. Shadow and light effects apply to every object.


Material Properties:

  1. Physical properties – material can vary in its x and y dimensions but not in z (it has a uniform thickness).
  2. Transform properties – material can grow and shrink only along its plane; it never bends or folds.
  3. Movement properties – material moves along any axis within its plane.

Material Elevation and Shadow:
  1. Elevation – the relative depth between two surfaces along the z-axis.
  2. Shadow – gives objects depth and directional movement; it indicates the amount of separation between surfaces.
  3. Object relationships – objects can move independently of each other, or in a parent-child relationship, where the child element is subordinate to its parent element.

Elevation (dp)   Component
24               Dialog, Picker
16               Nav drawer, Right drawer, Modal bottom sheet
12               Floating action button (pressed)
9                Sub-menu (+1dp for each sub-menu)
8                Menu, Card (raised), Raised button (pressed)
6                Floating action button (resting elevation), Snackbar
4                App bar
3                Refresh indicator, Quick entry/Search bar (scrolled state)
2                Card (resting), Raised button (resting), Quick entry/Search bar (resting)
1                Switch
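
In code, elevation is simply a z-value on a view. Below is a minimal Kotlin sketch (API 21+) using the dp values from the table above; the dp-to-pixel helper is our own, not a framework function.

    import android.content.Context
    import android.view.View

    // Convert dp to pixels for the current screen density.
    fun Context.dpToPx(dp: Float): Float = dp * resources.displayMetrics.density

    // Card resting elevation is 2dp; a raised (picked up) card sits at 8dp.
    fun applyCardElevation(card: View, raised: Boolean) {
        card.elevation = card.context.dpToPx(if (raised) 8f else 2f)
    }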

Animation:

Responsive Interaction:

  • Touch, voice, mouse, and keyboard are all equally important input methods.
  • UI elements appear tangible, even though they are behind a layer of glass (the device screen). To bridge that gap, visual and motion cues acknowledge input immediately and animate in ways that look and feel like direct manipulation.
Responsive interaction elevates an application from an information-delivery service to an experience that communicates using multiple visual and tactile responses.

1. Surface reaction:
Instant visual confirmation at the point of contact: under the pad of a finger for touch, at the microphone for voice, or in the appropriate field for a keyboard press.

2. Material response:
Like surface reactions, material can lift up when touched, indicating an active state. On touch, the user can generate new material, transform existing material, or directly manipulate sheets of material by dragging or flinging them. Material can be resized linearly or radially.

3. Radial action:
Radial action adds clarity through visual reactions to user input: the ripple of ink spreading outward from the point of input.
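
On Android this ink ripple is available as the platform RippleDrawable (API 21+). A brief Kotlin sketch, with an illustrative ink color:

    import android.content.res.ColorStateList
    import android.graphics.Color
    import android.graphics.drawable.RippleDrawable
    import android.view.View

    // Wrap a view's existing background in a ripple: touches now spread
    // ink outward from the exact point of contact.
    fun addRipple(view: View) {
        val ink = ColorStateList.valueOf(Color.argb(60, 0, 0, 0)) // translucent ink
        view.background = RippleDrawable(ink, view.background, /* mask = */ null)
    }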

Transitions

Visual Continuity :
Transitioning between two visual states should be clear, smooth, and effortless. A well-designed transition tells the user where to focus their attention.

Hierarchical Timing:
When building a transition, consider the order and timing of element movement. Ensure that motion supports the information hierarchy, conveying what content is most important by creating a path for the eye to follow.

The paths elements travel along should make sense, be orderly, and move in a coordinated manner.


Style:

Color palette:
Google suggests using the 500 colors as the primary colors in your app and the other colors as accent colors in Android Material Design.

Primary color:
When using a primary color in your palette, this color should be the most widely used across all screens and components.

Secondary color:
Palettes with a secondary color may use this color to indicate a related action or information. The secondary color may be a darker or lighter variation of the primary color.

Accent color:
The accent color should be used for the floating action button and interactive elements, such as:
  • Text fields and cursors
  • Text selection
  • Progress bars
  • Selection controls, buttons, and sliders
  • Links
Layouts:

Metrics & keylines:

Baseline grids:
All components align to an 8dp square baseline grid for mobile, tablet, and desktop. Iconography in toolbars aligns to a 4dp square baseline grid.

Keylines & spacing:
Keyline and spacing guidance for elements on mobile, tablet, and desktop:
Status bar: 24dp
Title: 80dp
Subtitle: 48dp
List item: 72dp
Toolbar: 56dp
Account menu and list items: 48dp
Space between content areas: 8dp
Navigation right margin: 56dp
Screen edge left and right margins: 24dp
Content left margin from screen edge: 80dp
Card left and right padding: 24dp
Card content left padding: 104dp
Status bar and space above list: 24dp
Space between content areas: 8dp
Screen edge left and right padding: 24dp
Icons' vertical center distance from screen edge: 52dp
Nav item left padding from screen edge: 104dp
Content left margin from screen edge: 80dp
Card left and right padding: 32dp
Card nav item left padding: 96dp
Subtitle, list item, and slider: 48dp

Grid:
Material Design's responsive UI is based on a 12-column grid layout. This grid creates visual consistency between layouts while allowing flexibility across a wide variety of designs. The number of grid columns varies based on the breakpoint system.


Components:

Bottom sheets:
A bottom sheet is a sheet of material that slides up from the bottom edge of the screen. It can be a temporary modal surface or a persistent structural element of an app.

1. Modal bottom sheets slide in over an app's content (see the sketch after the font and color notes).
2. Persistent bottom sheets are an integral part of an app's layout.
Font and color
  • Text: Roboto Regular 16sp,  #000 87%
  • Title (optional): Roboto Regular 16sp, #000 54%
  • Default bottom sheet background fill: #FFF
  • Transparent overlay fill: #000 20%
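
As a minimal sketch, a modal bottom sheet can be shown with the Material Components library's BottomSheetDialog; contentView stands for any view you have already inflated.

    import android.content.Context
    import android.view.View
    import com.google.android.material.bottomsheet.BottomSheetDialog

    // Show a temporary modal surface that slides in over the app's content.
    fun showModalSheet(context: Context, contentView: View) {
        BottomSheetDialog(context).apply {
            setContentView(contentView)
            show()
        }
    }
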
Button:
A button clearly communicates what action will occur when the user touches it. It consists of text, an image, or both, designed in accordance with your app’s color theme.

There are three standard types of buttons:
  • Floating action button: A circular material button that lifts and displays an ink reaction on press.
  • Raised button: A typically rectangular material button that lifts and displays ink reactions on press.
  • Flat button: A button made of ink that displays ink reactions on press but does not lift.
Card: A card is a sheet of material that serves as an entry point to more detailed information. A card could contain a photo, text, and a link about a single subject.
Card collections only scroll vertically.
Cards can be constructed using blocks of content which include: 1. An optional header 2. A primary title 3. Rich media 4. Supporting text 5. Actions
Metrics & keylines in cards:
Primary title top padding: 24dp
Primary title bottom padding: 16dp
Action button row padding: 8dp
Supporting text top padding: 16dp
Supporting text bottom padding: 24dp
Supporting text: 14sp

Elevation:
Card resting elevation: 2dp
Card raised elevation: 8dp
Chip: A chip represents a complex entity in a small block. It may contain a photo, a short title, and brief information, and may also contain an icon.
Snackbars: A snackbar appears at the bottom of the screen to show brief information. It can contain a single action, and only one snackbar may be on screen at a time; a short sketch follows this list of components.
Steppers: Steppers convey progress through numbered steps.
Tabs: Tabs switch between different views or functional aspects of an app. They control the display of content in a consistent location.
Text fields: Text fields let the user input text, select text, and look up data.
Tooltips: Tooltips are labels that appear on hover and focus when the user hovers over an element with the cursor, focuses on an element using a keyboard (usually through the tab key), or upon touch (without releasing) in a touch UI.
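
A quick Kotlin sketch of the snackbar behavior described above, using the Material Components Snackbar API; the message text and undo callback are illustrative.

    import android.view.View
    import com.google.android.material.snackbar.Snackbar

    // Brief information at the bottom of the screen, with at most one action.
    fun confirmDeletion(anchor: View, undo: () -> Unit) {
        Snackbar.make(anchor, "Item deleted", Snackbar.LENGTH_LONG)
            .setAction("UNDO") { undo() }
            .show()
    }
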
Patterns:
Fingerprint:
Fingerprint detection can be used to unlock a device, sign in to apps, and authenticate purchases with Google Play and some third-party apps.
Fingerprint is not as secure as a strong PIN or password.

Permissions:
Permission requests should be simple, transparent, and understandable. When requesting access, apps should ensure that either the feature itself or an explanation provided makes it clear why a permission is needed.
Runtime Permissions:
Apps may request permission to access information or use device capabilities at any time after installation. When a user needs to perform an action in an app, such as using the device camera, the app may request permission at that moment.
Users may also allow or deny the permissions of any app from Android Settings anytime after installation.
Denied Permissions:
Provide feedback whenever a permission is denied.
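
A minimal Kotlin sketch of such a request with the AndroidX Activity Result API: the app asks for the camera permission at the moment the user taps the camera feature, and gives feedback if it is denied. Class and helper names are illustrative.

    import android.Manifest
    import androidx.activity.result.contract.ActivityResultContracts
    import androidx.appcompat.app.AppCompatActivity

    class CameraActivity : AppCompatActivity() {

        // Ask at the moment of use, then react to the user's decision.
        private val requestCamera =
            registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
                if (granted) startCamera() else showCameraDeniedMessage()
            }

        fun onCameraButtonClicked() {
            requestCamera.launch(Manifest.permission.CAMERA)
        }

        private fun startCamera() { /* open the camera feature */ }
        private fun showCameraDeniedMessage() { /* explain why the feature is unavailable */ }
    }
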
Navigational Transitions: Movements between states in an app, such as from a high-level view to a detailed view. Most, but not all, transitions are hierarchical in nature.

Parent to Child: Exploring deeper levels, or screens, of an app is a hierarchical journey that starts at a parent screen. From there, a user can explore multiple possible sub-screens, which are children to the parent screen.

Sibling to Sibling: Sibling transitions occur between elements at the same level of hierarchy.


iBEACON

INTRODUCTION

The terms iBeacon and Beacon are often used interchangeably.
iBeacon is the name of Apple's technology standard, which allows mobile apps (running on both iOS and Android devices) to listen for signals from beacons in the physical world and react accordingly. In essence, iBeacon technology allows mobile apps to understand their position on a micro-local scale, and deliver hyper-contextual content to users based on location. The underlying communication technology is Bluetooth Low Energy.

BLUETOOTH LOW ENERGY

Bluetooth Low Energy is a wireless personal area network technology used for transmitting data over short distances. As the name implies, it’s designed for low energy consumption and cost, while maintaining a communication range similar to that of its predecessor, Classic Bluetooth.
  1. Power consumption: Bluetooth LE, as the name hints, has low energy requirements; a beacon can last up to 3 years on a single coin-cell battery.
  2. Lower cost: BLE is 60-80% cheaper than traditional Bluetooth.
  3. Application: BLE is ideal for simple applications requiring small periodic transfers of data. Classic Bluetooth is preferred for more complex applications requiring consistent communication and more data throughput.

BLE Communication Process
  1. BLE communication consists primarily of “advertisements”: small packets of data broadcast at a regular interval by beacons or other BLE-enabled devices via radio waves.
  2. BLE advertising is a one-way communication method. Beacons that want to be “discovered” broadcast, or “advertise”, self-contained packets of data at set intervals. These packets are collected by devices like smartphones, where a variety of applications can use them to trigger things like push messages, app actions, and prompts (a scanning sketch follows this list).
  3. Apple's iBeacon standard calls for an optimal broadcast interval of 100 ms. Broadcasting more frequently uses more battery life but allows for quicker discovery by smartphones and other listening devices.
  4. Standard BLE has a broadcast range of up to 100 meters, which makes beacons ideal for indoor location tracking and awareness.
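
On Android, listening for these advertisements looks roughly like the Kotlin sketch below; Bluetooth and scan/location permission handling is omitted for brevity.

    import android.bluetooth.BluetoothManager
    import android.bluetooth.le.ScanCallback
    import android.bluetooth.le.ScanResult
    import android.content.Context

    // Each ScanResult delivers one advertising packet from a nearby beacon.
    val scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            val bytes = result.scanRecord?.bytes ?: return
            // bytes is the raw advertisement payload; the iBeacon sketch
            // below shows how its fields are decoded.
        }
    }

    fun startBeaconScan(context: Context) {
        val manager = context.getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
        manager.adapter.bluetoothLeScanner.startScan(scanCallback)
    }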

iBeacon uses BLE communication

With iBeacon, Apple has standardized the format for BLE advertising. Under this format, an advertising packet consists of four main pieces of information (the parsing sketch after this list shows how they are decoded):
  1. UUID: This is a 16-byte value used to differentiate a large group of related beacons. For example, if Coca-Cola maintained a network of beacons in a chain of grocery stores, all Coca-Cola beacons would share the same UUID. This allows Coca-Cola's dedicated smartphone app to know which beacon advertisements come from Coca-Cola-owned beacons.
  2. Major: This is a 2-byte value used to distinguish a smaller subset of beacons within the larger group. For example, if Coca-Cola had four beacons in a particular grocery store, all four would have the same Major. This allows Coca-Cola to know exactly which store its customer is in.
  3. Minor: This is a 2-byte value meant to identify individual beacons. Keeping with the Coca-Cola example, a beacon at the front of the store would have its own unique Minor. This allows Coca-Cola's dedicated app to know exactly where the customer is in the store.
  4. Tx Power: This is used to determine proximity (distance) from the beacon. Tx power is defined as the strength of the signal exactly 1 meter from the device. It has to be calibrated and hardcoded in advance; devices can then use it as a baseline to give a rough distance estimate.
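
Decoding those four fields from a scan result is mechanical, as the Kotlin sketch below shows. getManufacturerSpecificData(0x004C) returns Apple's payload with the company ID already stripped; the distance estimate at the end uses a generic path-loss model, not anything specified by Apple.

    import android.bluetooth.le.ScanResult
    import kotlin.math.pow

    data class IBeacon(val uuid: String, val major: Int, val minor: Int, val txPower: Int)

    // Payload layout: type (0x02), length (0x15), 16-byte UUID,
    // 2-byte Major, 2-byte Minor, 1 signed byte of calibrated Tx power.
    fun parseIBeacon(result: ScanResult): IBeacon? {
        val data = result.scanRecord?.getManufacturerSpecificData(0x004C) ?: return null
        if (data.size < 23 || data[0] != 0x02.toByte() || data[1] != 0x15.toByte()) return null

        val uuid = data.sliceArray(2..17).joinToString("") { "%02x".format(it) }
        val major = ((data[18].toInt() and 0xFF) shl 8) or (data[19].toInt() and 0xFF)
        val minor = ((data[20].toInt() and 0xFF) shl 8) or (data[21].toInt() and 0xFF)
        val txPower = data[22].toInt() // signed dBm measured at 1 meter
        return IBeacon(uuid, major, minor, txPower)
    }

    // Rough distance estimate from Tx power and measured RSSI using a simple
    // path-loss model (n ~ 2 in free space); real deployments need calibration.
    fun estimateDistanceMeters(txPower: Int, rssi: Int, n: Double = 2.0): Double =
        10.0.pow((txPower - rssi) / (10.0 * n))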

Why is iBeacon a Big Deal?


  1. With an iBeacon network, any brand, retailer, app, or platform will be able to understand exactly where a customer is in the brick and mortar environment. This provides an opportunity to send customers highly contextual, hyper-local, meaningful messages and advertisements on their smartphones.
  2. The typical scenario looks like this. A consumer carrying a smartphone walks into a store. Apps installed on a consumer’s smartphone listen for iBeacons. When an app hears an iBeacon, it communicates the relevant data (UUID, Major, Minor, Tx) to its server, which then triggers an action. This could be something as simple as a push message [“Welcome to Target! Check out Doritos on Aisle 3!”], and could include other things like targeted advertisements, special offers, and helpful reminders [“You’re out of Milk!”]. Other potential applications include mobile payments and shopper analytics and implementation outside of retail, at airports, concert venues, theme parks, and more. The potential is limitless.
  3. This technology should bring about a paradigm shift in the way brands communicate with consumers. iBeacon provides a digital extension into the physical world. We're excited to see where iBeacon technology goes in the next few years.