Technology and Innovation

AI and Human Employment

We are hopeful that while AI will take over some jobs, humans will adapt by switching to work that cannot be automated, as we have done in the past (e.g. the Industrial Revolution in the 18th century, the computer revolution in the late 20th century). This is unlikely to be true, for the following reasons.

In the past, automation meant a step up in machine capability, but the gap between machine and human capabilities remained very wide. Humans could keep ahead of machines by doing the things machines could not, especially operating machines and knowledge work. However, AI is catching up to human intelligence in vertical domains. It is now possible to automate jobs in accounting, law and similar areas of knowledge work, putting all these workers at risk. These transitions are accelerating; we humans do not have time to adapt to the changes. Within a single generation, we have started to automate knowledge work, and this is just the beginning.

With the combination of robotics and AI, tasks that require specialized labour are susceptible to automation. This has already started to happen both inside homes (e.g. washers, dryers, dishwashers, the Roomba) and factories (automated manufacturing of cars, robots in warehouses). Even medical professionals are not immune. It is not a stretch to imagine robot surgeries in the future; in fact, robots are routinely used to assist in surgeries today (e.g. during minimally invasive procedures).

Ultimately, economics will drive this trend by exploiting cost advantages. When a robot is cheaper than a human, a robot will be deployed. A robot does not need downtime and does not have to be paid. How can humans compete?

Who knows me better than I know myself?

Who knows us better than ourselves? No seriously, who does? While the answer to 'who' may not be surprising, the extent to which they do may be astonishing.

We are aware that a vast amount of information is being collected by each of our devices. While some of this happens without our explicit knowledge, most of the data comes from text and images that we explicitly provide. Within each mobile application, various SDKs and JavaScript snippets automatically collect and transmit data back to their servers. Desktop browsers and browser plugins have the means to track and manipulate the content on any page that is browsed. When banking online, the browser and plugins have access to personal financial data, including account numbers and balances. They can track online shopping and spending behavior, medical problems, sexual behavior, and religious and spiritual beliefs.

Accessing data through a web browser is inherently less secure than using an application, because the content delivery and its consumption are controlled by two different entities with different objectives. A browser may allow plugins to be installed which can track and manipulate content; the website providing the content has little or no control over this. Cross-device tracking techniques can now identify a user across devices and correlate a person's browsing history across different desktop and mobile devices. While I explicitly sign in to my Google account from multiple devices, other companies are able to determine this through other means. For example, if I browse for an Amazon product on my phone, an advertisement for the same product is displayed on my other devices.

In addition to having information collected about us, we voluntarily provide significant amounts of information about ourselves, mostly by uploading photos, email and messages and by storing documents online.

Here is a photo taken in the early 1980s, which was later scanned and uploaded to Google Photos.

Google determined this picture was taken at NASA Johnson Space Center in Houston, Texas, without any GPS information embedded in the photo. Google has created a visual representation of the entire world from all the uploaded photos, which has been folded into Google Maps and Google Street View. Every location, building and point of interest has been tagged, analyzed and memorialized. Using this, Google can accurately locate where a picture was taken even without GPS coordinates. Additionally, image recognition algorithms have become so good that Google is now able to identify people across different time periods. The three people in the foreground were accurately identified, thirty years after the picture was taken.

Here are some of the implications of these technologies, extrapolating from where we are today.

  1. Using image recognition, Google and Facebook are able to identify people from their photos. From uploaded photos, they can create a list of real-world connections between people even if others in the photos are not tagged. 
  2. Photos provide a wealth of other information including
    1. Locations where we live and the places we visit.
    2. Personal tastes and preferences including
      1. What we like to wear,
      2. What we like to eat and where,
      3. What we drive,
      4. What we watch, play, listen to as in sports, games, music and films.
    3. A person's health can be determined by
      1. Using visual cues to estimate a person's weight over time,
      2. Tracking distance moved (walked, run, cycled),
      3. Tracking sleeping and waking times,
      4. Tracking number of visits to the doctor,
      5. Tracking heartbeat using a fitness tracker if available.
    4. Track relationships between people by analyzing the sentiment from the photos they upload.
  3. By capturing mouse movements on the screen, determine whether a user is right- or left-handed.
  4. Through the browser, track personal financial status, including how much and where each person spends their money and what their bank balances are.
  5. Most personal correspondence has moved online through email and messaging applications. These products provide insights into our deepest and most intimate thoughts, emotions and sentiments.

These companies have a near 360-degree picture of who we are, what we wear and eat, where we travel, the state of our health, our spending patterns and our thoughts and feelings. They are constantly developing new algorithms and techniques to learn more about us from the current data sets. With advances in AI and ML, it is possible to correlate all this information to create deeper analyses. While currently this is used to serve more relevant advertisements, it could have other uses in the near future. For example, an AI assistant could make recommendations and predictions based on personal knowledge. It could recommend a family-friendly car upon the arrival of a new child (if there is not one already), or suggest that a person who has been steadily gaining weight and missing work see a doctor.

However, this information could have more sinister uses. Imagine what could happen if knowledge of a person's physical address, coupled with their current location, were to fall into the wrong hands. Google, Facebook and Uber may know this directly, while others like Amazon and UPS/FedEx could infer it from shopping or delivery patterns. Through Facebook or Instagram, the whole world may know when a person or family is away from their home for an extended period.

Is the cat out of the bag?

Individually, there are certain things we can do to minimize exposure, such as using native applications (e.g. the mobile banking app rather than the browser), browsing in incognito mode and disabling browser plugins.

Legislation has to be strengthened so that ownership of data rests squarely in the hands of consumers. Data collection should be separated from its usage, i.e. permissions have to be individually and explicitly granted for collecting and for using data. For example, it should be possible for a consumer to allow location information to be collected and used for routing, but not for advertising. If this does not happen voluntarily, it has to be legislated and audited to verify compliance. Sharing of data between businesses should follow the same guidelines.

We do not want to rewind to a time before the mobile internet and give up the conveniences that it provides. But when trading privacy for convenience, the tradeoff should rest squarely with the consumer.

Compute is ephemeral while data has gravity

The shift from compute-centric to data-centric computing is driven by a confluence of two trends: the first is an increase in the data collected, and the second is using this data to unlock additional value in the supply chain.

The explosion in data collection is driven by the increase in the number of computing devices. Historically, this number has increased by orders of magnitude with each generation, from mainframes to PCs to mobile devices. While there were only a handful of mainframe computers and a PC for every few people, mobile devices are ubiquitous; two-thirds of the adults on the planet possess one. The growth of IoT devices will follow the same exponential trend set by their ancestor devices; there will be many IoT devices per person. But unlike their ancestors, IoT devices will be specialized and mostly autonomous.

Autonomous edge computing

Traditionally in computing, the value of data increased when it was shared: Excel spreadsheets became more useful when shared with co-workers, photos and videos when shared with family and friends. However, specialized devices collect data about their immediate environment, which may not be useful to another device in a different environment. For example, autonomous cars collect 10 GB of data for every mile; it is neither necessary nor possible to transfer all this data over the internet and back for real-time decision making. As the data about a car's current environment changes in real time, data from the past is no longer relevant and does not need to be stored. Additionally, this raw data is not useful to another car at a different location. Enabled by higher bandwidth at lower latencies, edge computing facilitates faster extraction of value from large amounts of data.

The inability to transfer large amounts of data over the internet will drive collaborative machine learning models like federated learning. Under this model, data collection and processing agents will run at the edge and transfer a summary of their learnings to the cloud. The cloud is responsible for merging (averaging) the learnings and distributing it back to the edges. In the case of autonomous cars, the learnings from each autonomous vehicle will be shared with the cloud. The cloud merges the learnings and redistributes it to all the other autonomous vehicles. 
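The merge step described above can be sketched as simple federated averaging. This is a minimal illustration, not any vendor's actual API: each edge reports only its model weights, and the cloud averages them element-wise.

```python
# Minimal sketch of the federated-learning merge step described above:
# edge nodes train locally and send only their model weights (never raw
# data) to the cloud, which averages them and redistributes the result.

def federated_average(edge_weights):
    """Cloud-side merge: element-wise average of weights from all edges."""
    n = len(edge_weights)
    dim = len(edge_weights[0])
    return [sum(w[i] for w in edge_weights) / n for i in range(dim)]

# Hypothetical weights learned independently by three edge devices.
edge_a = [0.2, 1.0, -0.4]
edge_b = [0.4, 0.8, -0.2]
edge_c = [0.3, 0.9, -0.3]

global_weights = federated_average([edge_a, edge_b, edge_c])
# The cloud then redistributes global_weights back to every edge,
# and each edge continues training from the merged model.
```

Real systems (e.g. Google's federated learning work) use weighted averages over gradient updates with far larger models, but the data flow is the same: summaries travel to the cloud, raw data stays at the edge.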

This trend has already started at Google, where engineers are working on federated collaborative machine learning without centralizing training data. Apple released the Basic Neural Network Subroutines (BNNS) library, enabling neural nets to run on the client. Currently BNNS does not train the neural net, but that is the next logical step. Specialized computers and systems will be built that are data-centric, i.e. with the ability to move large amounts of data for processing at very high rates. One of the first examples is Google's Tensor Processing Unit (TPU), which outperforms standard processors by an order of magnitude. In the near future, every mobile device will have an SoC capable of running a fairly complex neural network. Data and the applications that consume it will be co-located, creating autonomous edge computing systems.

The gravity of data

As the cost of compute goes down, the big three cloud vendors (AWS, Azure and Google) provide more and more services around data. Larger amounts of data need more compute and higher bandwidth at lower latencies to extract value. It is easier to bring compute to the data than the other way around.

These vendors now provide AI and machine learning as a service to extract value from this data. In the near future, it will be possible to automatically analyze and transform the data to provide actionable insights. Think of the raw data as database tables and transformed data as indexes which co-exist with the tables. The vendors will automate data transformation and analysis, locking the data in and making it non portable. Organizations should ensure that the process of value extraction is not dependent on a vendor’s proprietary technology, and the transformed data stays portable.

So, in summary:

  1. We have shifted from being compute to data centric.
  2. Large, temporal data will drive autonomous edge computing and federated machine learning. 
  3. Enterprises should not use proprietary technology for extracting value from their data.



Economic and Social Impact of Demonetization in India

The prime minister Mr Modi told the nation 'no more cash',
he made all the money disappear, in a flash.
They had looted, they had plundered,
they had torn the country asunder,
and all the corrupt politicians ended up in the trash.

On the 9th of November 2016, Mr. Narendra Modi, the Prime Minister of India, announced a series of measures to combat corruption and black money in India. In one swift move, Mr. Modi removed the two largest bills in circulation, resulting in a decrease of 86%, or INR 13.62 trillion (USD 197 billion), of the total currency value of INR 15.84 trillion (USD 230 billion).

Black money broadly means unaccounted or undisclosed income and assets not reported for tax purposes, without reference to their origin (legal or illegal); the 'black economy' denotes the sum total of such incomes, assets and activities. While in the developed countries of North America and Europe black money is generated primarily through illegal activities, in the developing countries of Asia and Africa it is generated from all conceivable sources: corruption and siphoning of public resources, trade-based black money due to non-reporting of incomes or profits and inflation of expenses, in addition to a host of criminal activities.

The rationale behind demonetization

  1. Financial transactions for the transfer of assets are conducted partly in cash, outside the purview of the legal system. For example, when buying a property for INR 1M, the transaction might be registered for INR 500,000 on which taxes are paid, while the other INR 500,000, paid in cash, stays outside the purview of the legal system.
  2. Local businesses generate black money by suppressing receipts and inflating expenditure. Businesses inflate expenses through fake or inflated invoices. Goods and services are invoiced multiple times. Corporations practice transfer mispricing by under invoicing their exports or over invoicing their imports.
  3. This wealth is used to subvert democracy by paying cash to voters to elect a certain political party to power. Once in power, the party recoups its investment at the expense of economic development. The two major national parties claim incomes of merely Rs 500 crore and Rs 200 crore. But this is not even a fraction of their expenses: they spend between INR 100 billion and INR 150 billion annually on elections.
  4. The bureaucracy is complicit in corruption, including bribery and theft by those holding public office - such as the grant of business, bribes to alter land use or to regularize unauthorized construction, leakages from government social spending programmes, speed money to circumvent or fast-track procedures, and black marketing of price-controlled services. Reducing the amount of cash in circulation limits the moneys that can be paid as bribes.
  5. Eliminate counterfeit currency.
  6. Limit benami (proxy) transactions, where purchases are made by proxies who merely lend their name on behalf of another person, while control vests with the person who actually makes the purchase and is the beneficial owner.

Impact on society

All this comes at the cost of a decline in GDP, estimated at two percentage points for up to two quarters, due to a reduction in consumption brought on by the shortage of cash. This disproportionately affects the lower classes, who lack savings and other reserves to fall back on. While opposition has been moderate so far, the longer the hardship continues, the less people will tolerate the suffering.

These steps have been taken as a move towards a cashless economy. A cashless economy, with an audit trail and increased transparency, will directly result in a reduction of corruption, the elimination of black markets, accountability and better governance. A cashless society will enable India's economic growth to be shared with the poor by ensuring that benefits from governmental programs flow directly to the recipients, bypassing the bureaucrats and eliminating leakage. There is likely to be an increase in government spending programs over the next couple of years before the next general elections in Q2 2019.

The number of Indians with bank accounts has surged to 1.17 billion, with 347 million savings bank accounts added since 2013. The next big challenge is to get rural people to actively use their bank accounts, 40% of which are dormant. The government might use the moneys collected through demonetization to inject cash directly into the accounts of the poor. An injection of INR 10,000 into 500M accounts would result in an expenditure of INR 5 trillion, which is still below the amount demonetized.
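The back-of-the-envelope arithmetic for that transfer scenario checks out; the figures below are the essay's own, not official projections.

```python
# Back-of-the-envelope check of the direct cash-transfer scenario above.
injection_per_account = 10_000              # INR per account
accounts = 500_000_000                      # 500M accounts
total = injection_per_account * accounts
print(f"INR {total / 1e12:.1f} trillion")   # prints "INR 5.0 trillion"

demonetized = 13.62e12                      # INR 13.62 trillion withdrawn
assert total < demonetized                  # transfer fits within the amount demonetized
```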

Impact on investments and capital

Mobile payment services like PayTM and online commerce sites like Flipkart and Amazon will be big beneficiaries of this move. Expect to see additional investment in fintech and e-commerce. Wealth will be hoarded in non-monetary assets like gold and in decentralized virtual currencies like Bitcoin, which exist outside the purview of governments.

This is one of the defining moments in India’s economic development. This will be looked upon as the time when the country broke free of the shackles of corruption holding it back and boldly went where none had gone before. 

Amazon Echo and the Rise of the Talking Droids

After using the Amazon Echo for the past few months, it is impossible to ignore the advance towards ambient intelligence, where computers are listening and sensing all the time and responding to the presence of humans. The Echo inexorably moves us closer to interfacing with computers the way we do with other humans. But given the Echo's current limited capabilities, there are also significant challenges to overcome before this becomes reality.


The Echo keeps me company in the kitchen when I cook and eat. In addition to playing music, I use it to order from Amazon, listen to audio books, set kitchen timers and add items to my grocery list.

In my short time with the Echo, I have come to see its enormous potential: the Echo is more than a voice-enabled device for buying Amazon products. It is a computing platform which can be extended through 'skills'. Skills are voice commands that enhance functionality, like summoning an Uber or ordering pizza. These skills are built by third-party application developers, like apps on mobile devices.
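A third-party skill of the kind described above boils down to a function that receives the assistant's JSON request and returns a spoken response. The sketch below follows the general shape of an Alexa custom-skill handler; the intent name `OrderPizzaIntent` and the response text are hypothetical.

```python
# Sketch of a voice 'skill' handler: receive a JSON request describing
# the user's intent, return a spoken-text response. The intent name
# "OrderPizzaIntent" is hypothetical, for illustration only.

def handle_request(event):
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request["intent"]["name"] == "OrderPizzaIntent"):
        text = "Okay, ordering your usual pizza."
    else:
        text = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Example request, trimmed to only the fields the handler reads.
event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "OrderPizzaIntent"}}}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

The platform handles the hard parts (wake word, speech-to-text, intent matching); the developer only maps intents to actions and responses.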

Improved Sensing

As the Echo evolves with better natural language processing and sensory capabilities, it will provide exponentially more services. It will be able to see, hear and detect motion, either by itself or by interfacing with external sensors. It will be able to distinguish me from my family members through voice recognition. If a camera were present, it would recognize me visually and identify my gestures to draw additional context around my speech.

An omni-directional motion sensor would enable the lights to be turned on or off automatically when I enter or leave the room. The camera, microphone and motion sensors could be used to enhance in-home security, generating notifications when detecting a stranger.

Voice Commerce

Enabling the Echo to use voice recognition along with visual identification to securely identify me will enable voice commerce. I could securely bank, pay bills, transfer money to family and friends, order food, book tickets and hotels, or perform any transaction that was carried out over the telephone in the past. All this can be performed without explicitly logging in; my command for the transaction automatically authenticates me.

Expanding the Reach

A device with a natural language interface would make the internet accessible to a group of users who would otherwise not be online due to technological challenges. For example these devices could be used by seniors, visually impaired users and illiterate or semi-literate people. They could use it to communicate with family, friends and doctors, perform secure online commerce and access emergency services. This would also be a perfect device to learn a foreign language.


But for all the promise shown by the Echo, there are some challenges that need to be overcome before we interface with computers through voice.


The Echo has been successful because it does a rather small set of things very well; the interface has been deliberately kept simple by being directive-based and non-conversational. Every question has to be prefixed with 'Alexa', and the Echo does not remember state between questions or the context around them. For example, I would like to follow the question "Alexa, where is 'Star Trek Beyond' playing?" with "Get me 2 tickets for 6.00 PM tonight", but this is not currently possible. The Echo is not able to use the information about the movie from the first question in the second. As another example, a camera would enable the Echo to determine whether I am talking to it using directionality, or even to sense whether I am the only person in the room. I could point to a light and say "turn it on" without having to explicitly explain to Alexa that "it" is the main light in the bedroom.

Additionally, unlike a mobile phone, this is a shared device, where user experience has to be personalized. The device has to be able to switch contexts when talking to different members of my family.

Marketplace and Application Discovery

The most useful device is the one with the widest range of applications. Developers are drawn to a large customer base, cool technologies, ease of development and platforms where they can monetize their applications. Currently the Echo is still in the ‘cool gadget’ category without a compelling application.

A marketplace (a store) has to provide an efficient way to discover new skills and services. A store may have to be built that interacts with users through a voice interface; building one without visual context is an interesting and challenging problem.

Privacy and Ethics

As a user, I’m concerned that the Echo can hear, see and store my information that may be used for later contextual reference. It is a challenge to store and use this data without compromising my privacy.

I am looking forward to the day when the Echo has evolved past these challenges. On that day, I will be watching my favorite episode of Star Trek when an incoming phone call from my accountant is redirected by Echo to voicemail, alerting me only when I finish watching the episode.

Speeding up the Internet

The internet is an aggregation of multiple interconnected terrestrial, submarine and radio networks.

Tier 1 networks are like global freeways, able to deliver traffic to destinations across multiple continents. Tier 1 networks exchange traffic with each other without payment, through peering agreements based on reciprocity. Tier 2/3 networks cover a smaller geographical area and get global access by making transit payments to tier 1 networks. These networks form the backbone of the internet.

The last mile is the part of the network that connects people's homes and businesses to the larger internet; it is typically part of the ISP's network. The middle mile is the segment linking the ISP's core network to the local, last-mile networks.

Comparing the internet to a tree, the tier 1 and 2 networks are the trunk and the middle mile the branches, connecting a very large number of "last mile" leaves. These last-mile links are the most numerous and most expensive part of the system. This is illustrated by the chart below.

  The internet as an inverted tree

In each of the next sections, we look into a major issue affecting latency and ways to improve performance.

Outdated protocols

The core internet protocols and routing mechanisms have not changed significantly in the last 40 years. TCP was built for reliability and congestion avoidance and has served those purposes wonderfully well, but it is starting to show its age. Major issues include:

  1. TCP is a round trip protocol and the only way to reduce latency is to reduce round trips.
  2. There is high overhead in acknowledging every window of data packets sent. This particularly affects streaming video, where the distance between server and client constrains download speeds.
  3. Larger networks and greater distances increase Round Trip Time (RTT) and packet loss, and decrease bandwidth.
  4. TCP is not very efficient when handling large payloads like streaming video, which in the US accounted for over 70% of internet traffic in late 2015 and is expected to grow to 80% by 2019.

The chart below illustrates the inverse relationship between RTT and throughput: as latency increases, throughput falls off sharply. This has been derived from a report published by Akamai.

 Increase in latency decreases throughput exponentially
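One way to quantify this relationship is the well-known Mathis model, which bounds steady-state TCP throughput by MSS / (RTT · √p), where p is the packet-loss rate. The segment size and loss rate below are illustrative, not Akamai's figures.

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. bound on steady-state TCP throughput, in bytes/sec:
    throughput <= MSS / (RTT * sqrt(p)). Doubling RTT halves throughput."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

mss, loss = 1460, 0.0001           # 1460-byte segments, 0.01% packet loss
for rtt_ms in (10, 50, 100, 200):  # illustrative round-trip times
    bps = 8 * mathis_throughput(mss, rtt_ms / 1000, loss)
    print(f"RTT {rtt_ms:4d} ms -> {bps / 1e6:7.1f} Mbit/s")
```

With these parameters the achievable rate drops from roughly 117 Mbit/s at 10 ms RTT to under 6 Mbit/s at 200 ms, matching the shape of the chart above.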

There are a number of optimizations that mitigate these problems. These include:

  1. Using pools of persistent connections to eliminate setup and teardown overhead.
  2. Compressing content to reduce the number of TCP roundtrips.
  3. Sizing TCP window based on real-time network latency. Under good conditions, an entire payload can be sent by setting a large initial window, eliminating the wait for an acknowledgement. Larger windows can result in increased throughput over longer distances.
  4. Intelligent retransmission after packet loss by leveraging network latency information, instead of relying on the standard TCP timeout and retransmission protocols. This could mean shorter and aggressive retransmission timeouts under good network conditions. 
  5. QUIC, a replacement for TCP built over UDP, and HTTP/2 include some of these optimizations from the ground up.
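The window-sizing optimization above amounts to keeping the TCP window at least as large as the bandwidth-delay product (BDP): a smaller window leaves the link idle while the sender waits for acknowledgements. A quick illustration, with assumed link parameters:

```python
# Sizing the TCP window to the bandwidth-delay product (BDP).
# A window smaller than the BDP caps throughput, because the sender
# stalls waiting for acknowledgements. Link parameters are assumed.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes 'in flight' needed to fill the link."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes, rtt_s):
    """Throughput achievable with a given window: one window per RTT."""
    return window_bytes * 8 / rtt_s

link_bps, rtt = 100e6, 0.080   # assumed 100 Mbit/s link, 80 ms RTT
print(f"BDP: {bdp_bytes(link_bps, rtt) / 1e6:.1f} MB")
# The classic 64 KB window would cap this link at only ~6.6 Mbit/s:
print(f"64 KB window: {max_throughput_bps(65_535, rtt) / 1e6:.1f} Mbit/s")
```

This is why larger (and dynamically sized) windows matter most over long distances: the BDP grows linearly with RTT.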


Congestion

Congestion happens at many different points on the internet, including peering points and the middle and last miles.

Peering points are the interconnections where different networks exchange traffic. The Border Gateway Protocol (BGP) is used to exchange routing information between networks. This older protocol has a number of limitations affecting its ability to keep up with the increase in routes and traffic. Peering points can also be deliberately used to throttle traffic and charge for increases in transit capacity, as in the case of Netflix vs Comcast (and Verizon).

The middle mile connects the last mile to the greater internet. The last mile, typically part of the ISP's network, is the segment connecting homes and businesses. These last-mile links are the most numerous and most expensive part of the system. Goldman Sachs estimated it would cost Google $140 billion to build out a nationwide network, or $1,000/home on average.

The large capital investment limits competition and creates an imperfect market dominated by a few players. Almost a third of US households have no choice of broadband internet service provider. With no competition, these corporations have no incentive to improve service. In locations where Google Fiber has entered the market, other providers have matched Google Fiber's prices and levels of service.

Minimizing long-haul content and computing

Distributing content and computing to the edge presents a unique set of challenges, especially for mobile devices.

  1. Mobile devices are likely to switch between cellular and wi-fi networks, resulting in disconnections.
  2. Mobile applications are personalized and interact through APIs (not static HTML web pages accessed through computer browsers) which cannot be cached as efficiently by CDNs.
  3. Streaming content especially video has to be optimized for different devices and resolutions. The downloadable stream is a function of both the device resolution and the bandwidth availability based on real time network conditions. 

These issues can be addressed by bringing content and computing as close to the users as possible.

Content delivery networks are effective in reducing latency and increasing throughput by taking content close to the users. Ideally, content servers are located within each user's ISP and geography. This minimizes reliance on inter-network and long-distance communication, especially through the middle-mile bottleneck of the internet. Better end-user experiences over cellular networks can be enabled by distributing content and delivery directly inside mobile operators' core networks.

When computing moves to the edge, data goes along with it. Data distributed to the edges has to be managed carefully to prevent concurrent modifications. Solutions have to be created to merge data and resolve conflicts automatically, or to keep the source data in escrow at the source and update it when the computing is complete.


Latency over the internet is fundamentally limited by the speed of light, aging infrastructure (networks and protocols), lack of competition and the increase in streaming video traffic. These problems can be addressed by moving content and computing closer to the edge, building smarter applications using newer protocols, and increasing competition to drive investment in the last-mile internet.

Food as a Service

I recently subscribed to Gobble, a food delivery service, after listening to an a16z podcast. Gobble, along with other similar services, offers complete dinner kits with step-by-step instructions for dinner preparation. They consistently provide delicious and healthy restaurant-quality meals in the 500-700 calorie range.

Services like this fundamentally change our relationship with food, including how we shop, cook and eat. They trend towards efficiency, nutrition and transparency by serving customers' desire to eat healthily and know where ingredients were sourced, while spending less time in the kitchen.

This is currently a mostly untapped $200B TAM (50M working people * 2 meals/day * $8 target price per meal * 250 working days/year) where a number of startups are being funded. Companies in this space can be broadly categorized as:

  1. Restaurant ordering and takeout services like Doordash, Forkable (food delivered at work), GrubHub, Yelp Eat24, UberEats
  2. Readymade food delivery services like Munchery, SpoonRocket, Sprig
  3. Dinner kit delivery services like Blue Apron, Chef'd, Chef day, Gobble, Hello Fresh, Home Chef, Plated
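The TAM figure quoted above can be reproduced directly from its stated assumptions; the inputs are the essay's rough estimates, not market research.

```python
# Reproducing the back-of-the-envelope TAM estimate quoted above.
working_people = 50_000_000   # 50M working people
meals_per_day = 2
price_per_meal = 8            # USD, target price per meal
working_days = 250            # working days per year

tam = working_people * meals_per_day * price_per_meal * working_days
print(f"${tam / 1e9:.0f}B")   # prints "$200B"
```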

Restaurant ordering and take out services

Restaurant ordering and takeout services allow customers to order online and (in certain cases) deliver food. By creating a marketplace for restaurants and customers, they streamline the ordering process by removing friction. This is beneficial to both parties: the restaurants gain access to a larger pool of customers, and the customers to a wider range of restaurants. This service enables the rise of takeout kitchens, which focus solely on food prepared for delivery, without the overhead of running a restaurant.

Readymade food delivery services

Readymade food delivery services prepare fresh food and deliver it at fixed times every day. While this model is well suited to densely populated areas, it is not easy to scale: the constraint is mass-producing a fresh, multi-ingredient, multi-process meal and delivering it to customers every single time. To scale this model, cooking might have to be outsourced and curated, at which point it starts to resemble the ordering and takeout services.

Dinner kit delivery services

Dinner kit delivery services will not replace trips to the grocery store anytime soon. But by eliminating shopping and prepping for a meal, they hit the sweet spot between cooking from scratch and ordering takeout, with the satisfaction of a healthy, delicious, home-cooked meal.

Dinner kit services scale well and lend themselves to consolidation, economies of scale and logistical optimization (by shipping in bulk). With Gobble, the cost of 6 meals is ~$70 ($12/meal), estimated shipping costs are $8 and packaging costs are $10, leaving $52 for food, labour and gross margin. Currently, products are transferred to distribution centers in bulk and shipped to customers individually. There are opportunities for optimization by sourcing bulk foods locally and processing part of the food closer to the customers.
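The per-order economics above work out as follows; the shipping and packaging figures are the essay's estimates, not Gobble's published numbers.

```python
# Unit economics for a 6-meal Gobble order, using the estimates above.
order_revenue = 70             # USD for 6 meals (~$12/meal)
shipping, packaging = 8, 10    # estimated per-order costs

contribution = order_revenue - shipping - packaging
print(f"${contribution} left for food, labour and gross margin "
      f"(${contribution / 6:.2f}/meal)")
# prints "$52 left for food, labour and gross margin ($8.67/meal)"
```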

Subscription model

Both the dinner kit and readymade food delivery services are better suited to a subscription model than restaurant ordering services in their current form. However, there are opportunities to innovate in restaurant ordering. For example, a restaurant ordering service could facilitate ordering from two different restaurants and consolidate the orders into a single delivery. It could create private, curated menus, setting its own prices and having them fulfilled by restaurants, especially if it can provide a predictable supply of orders. Taking this one step further, it could offer a subscription where users order a fixed number of times a week for a predetermined price.


Companies like Whole Foods, with retail distribution centers and on-site food processing facilities, have two significant advantages in offering dinner kits.

  1. Assembling the final product in store with locally sourced ingredients and centrally prepared sauces.
  2. Retail distribution centers eliminate the last mile distribution problem.

Then there are the big players. Amazon offers free one-hour restaurant delivery in select regions through Amazon Prime Now; expect them to enter the dinner kit delivery market through AmazonFresh. With their massive sourcing and distribution network, they can operate efficiently at scale. UberEats recently entered the food delivery space, delivering curated meals from local restaurants quickly.

In summary, this is a huge market with tremendous opportunity to disrupt and innovate. This is where many battles will be waged, and winners will prevail by offering a low-cost, high-quality, frictionless service.