On Monday, December 5th, Amazon announced that in early 2017 they will be opening their new concept store, Amazon Go (No Lines. No Checkout. No, Seriously). As the catchline describes, the new grocery store will require no waiting in line to check out and purchase your items. Simply walk in, grab what you like and walk out; all items will be charged online to your Amazon account. Almost seems too good to be true, right?
What is Amazon Go?
Amazon have been working behind the scenes for 4 years to create their idea of a futuristic supermarket. No fuss, no lines, no self-checkout automated voice shouting “Unexpected item in bagging area” until your ears bleed. The store will rely on the latest technology to identify which items shoppers pick up in store and place in their bag, and then charge them the appropriate amount online through their Amazon account. Classic grocery items such as bread, milk, etc. will be stocked alongside freshly prepared meals and meal kits made by staff in-store.
Amazon announced that their first flagship store will be opening in Seattle, hometown of the online retail giant, with plans to roll out a further 2,000 stores within the next decade. The first store is currently undergoing beta testing, open only to Amazon employees at present. Amazon released a launch video for the store, unveiling their latest venture and describing the technology behind it all: “Just Walk Out Technology”. But how does this mysterious black box ACTUALLY work?
How does it work?
Disclaimer: Most of what is written below is my amateur opinion on how a store like this can work mixed with facts that Amazon have released around the technology.
While the idea of a store like this is a simple concept, in reality the technology needed to support it must be cutting-edge. Interestingly, Amazon have been very reluctant to share information about how their aptly named “Just Walk Out Technology” works, instead offering us a series of buzzwords such as “Computer Vision”, “Deep Learning Algorithms” and “Sensor Fusion” (much like the technology used in self-driving cars). This barely begins to explain how Amazon Go can work, and so here begins my attempt at deconstructing their technology. I would split the overall technology into three big features:
- Customer Identification: In order to enter the Amazon Go store, a customer must download the Amazon Go app and scan a QR code at the entrance. This identifies the shopper as they walk in. I believe this serves multiple purposes: not only does it let the cameras detect the features of the customer and follow them around the store, but it also allows the history of previously purchased items to be used as a predictor of what they will buy that day. For example, if a customer in the milk aisle picks up a bottle of milk, it would be hard for a camera to distinguish between skimmed and semi-skimmed. However, if the purchase history for that customer shows they have only ever bought semi-skimmed, a safe assumption can be made that they are once again buying semi-skimmed milk.
- Item Identification: The success of Amazon Go relies almost entirely on the in-store systems being able to detect, extremely accurately, which items people are picking up. The rise in computational power has allowed deep learning algorithms to train very deep architectures, meaning that computer vision has taken a leap in subfields such as object detection and action recognition. This way, Amazon can not only follow a customer around the store but also detect any actions, such as picking up an item. I believe sensor fusion also comes into play here: a shelf scale could detect a change in weight to help identify an item, or carefully placed sensors could tell that the stock of a specific item has gone down by one.
- Final Verification: When exiting the store, customers walk through a set of gates which perform the final verification of their basket. This could be easily achieved using RFID tags, which emit a small signal so that sensors can detect the items in a basket. This adds an extra layer of accuracy when determining what customers have bought before charging them online.
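To make the sensor-fusion idea above concrete, here is a minimal sketch of how evidence from a vision classifier, a shelf weight sensor and a purchase-history prior could be combined to decide between two visually similar items. This is entirely my own speculation: every function name, weight likelihood and number below is made up for illustration and has nothing to do with how Amazon actually implements it.

```python
# Hypothetical sensor-fusion sketch: multiply independent evidence
# sources (naive Bayes style) into a single score per candidate item.
# All names and numbers are invented for illustration.

def fuse_scores(candidates, vision_conf, weight_delta_g, purchase_counts):
    """Score each candidate item using vision confidence, a simple
    weight-match likelihood, and a Laplace-smoothed history prior."""
    total = sum(purchase_counts.get(c, 0) + 1 for c in candidates)
    scores = {}
    for item, expected_weight in candidates.items():
        # Vision: the classifier's confidence for this item.
        p_vision = vision_conf.get(item, 0.01)
        # Weight: penalise mismatch between the shelf's measured weight
        # change and the item's expected weight.
        mismatch = abs(weight_delta_g - expected_weight) / expected_weight
        p_weight = max(0.0, 1.0 - mismatch)
        # History: smoothed prior from the customer's past purchases.
        p_history = (purchase_counts.get(item, 0) + 1) / total
        scores[item] = p_vision * p_weight * p_history
    return scores

# Two milk bottles the camera alone can barely tell apart (grams).
candidates = {"skimmed_milk": 1030, "semi_skimmed_milk": 1035}
vision = {"skimmed_milk": 0.48, "semi_skimmed_milk": 0.52}
history = {"semi_skimmed_milk": 14}  # only ever bought semi-skimmed

scores = fuse_scores(candidates, vision, 1034, history)
print(max(scores, key=scores.get))  # history tips it to semi-skimmed
```

The point of the sketch is that no single sensor needs to be perfect: the weights are nearly identical and the camera is almost 50/50, but the purchase-history prior resolves the ambiguity, which is exactly why scanning in with your Amazon account at the door matters.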
The Big Picture
Undoubtedly, if Amazon pull this store off with some degree of success, it will cause a technological disruption in the food retail industry, with both positive and negative consequences. Firstly, the technology could be easily distributed by Amazon and implemented across different chains of supermarkets worldwide. Ultimately, I believe Amazon’s hope is not to establish themselves as offline food distributors, but to use these stores as proofs of concept and then penetrate the market with their new tech. Furthermore, the sheer number of cameras and the amount of data being collected in these stores will create some unbelievable insights into shopping behaviours, for example being able to see which brands are picked up and put back the most, or what people spend the most time looking at. This could have a massive impact on personalizing the shopping experience further.
Of course, such cutting-edge technology always comes with drawbacks. If this technology is distributed to larger retailers, it would have to be close to 100% accurate: a 1% loss in item-detection accuracy could cost retailers millions, simply due to the massive scale at which these companies operate. Can Amazon guarantee such accuracy? Moreover, we are once again entering the ethical debate of AI vs. jobs, in this case checkout assistants. Should we allow such technology to exist? Can alternative jobs be found for checkout assistants? I heard an interesting argument from my Programme Director recently: a potential solution to this specific problem is that companies that profit from savings generated by AI should keep employees on their payroll and let them carry out work for social progress. Sadly, Jeff Bezos does not strike me as a person who would care for such a solution, but with the accelerating pace of AI development, discussions like these will become unavoidable for tech giants.