The 7 key takeaways from Google I/O 2019
24 May 2019
In May, Google hosted their annual conference for developers, Google I/O, in Mountain View, California. Two of our front-end experts attended in order to learn more about machine learning, speed tools, web apps, privacy and augmented reality.
7 Takeaways from Google I/O
2019 is a big year for Google: Android turned 10, Google turned 20, and the World Wide Web turned 30.
This year, Google has already launched the Android Q beta along with a new, cheaper series of Pixel phones and new initiatives to meet the growing demand for privacy and data security, while new features in Chrome once again expanded the limits of what can be done on the web.
Focus on privacy
Google addressed users’ increasing focus on personal data by presenting new options for managing the data that Google stores about them. In their account settings, users can now delete data manually or turn on automatic deletion once the data has been stored for 3 or 18 months.
Generally, privacy played a big role in many other new features, for example, Google Assistant’s speech recognition now works locally without the use of Google’s servers, incognito mode will be available in Google Maps, and the way cookies work in Chrome will be altered.
Google Assistant has been reduced in size so its vital features can now run locally on the phone rather than in the cloud. This means significantly faster responses, more privacy and offline usage.
Another new concept was “Duplex on the web”, which lets the Assistant fill in multi-step forms online (such as hotel bookings) for the user automatically, without websites having to adapt their code. It’s very smart!
The speech recognition that powers Google Assistant has also produced something interesting on Android Q: a new Live Caption feature that adds automatic live subtitles to any content played on the phone, e.g. podcasts, videos on social media, or even personal videos from the user’s own album.
The caption technology also powers a new Live Transcribe app that displays subtitles for a conversation taking place in the room around the phone. It is aimed at users who are deaf or hard of hearing, from whom there has been much positive feedback, but it is equally useful for anyone who wants to follow a video without sound on the bus or train.
Google I/O had a significant focus on machine learning, with several tracks dedicated to getting developers in every field up and running. Google presented TCAV (Testing with Concept Activation Vectors), which creates transparency in machine learning models that are often opaque, and identifies the bias that produces incorrect results so it can be removed. As an example, they used a model trained to identify images of doctors. With the help of TCAV, you can see which concepts the model looks for – for example, “white coat”, “stethoscope” and “man” – of which the latter should of course not be decisive.
Every year, Google uses the I/O conference as a platform to showcase new web features and highlight suggestions as to how the web platform should evolve.
Speed, speed, speed
As always, a great deal of the web content at Google I/O was dedicated to speed optimization. And that’s great! Speed tooling is very interesting to us for measuring and debugging performance, and we are currently working to make it a cohesive part of our continuous integration.
Lighthouse, Google’s page profiling tool, has once again gained new features. Lightwallet lets developers set performance budgets for various asset types, and with Lighthouse CI those budgets can be enforced throughout the life of a project. In addition, Google is now allowing third-party plug-ins for Lighthouse, so that different communities can get tailored feedback and optimization suggestions for their particular stack (WordPress is first in line with a plugin).
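As a sketch of what such a budget might look like, Lighthouse accepts a `budget.json` file along these lines (the numbers are purely illustrative, sizes are in kilobytes):

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 600 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

The file can then be passed to the CLI, e.g. `lighthouse https://example.com --budget-path=budget.json`, after which the report flags any resource type that exceeds its budget.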
In an AMA session, the Chrome team acknowledged that Lighthouse scores can fluctuate slightly between runs, and they have dedicated developers working on stabilizing this. All in all, Lighthouse is striving to become the integrated tool for diagnosing speed problems on websites.
Google is now also extending Firebase Performance Monitoring to the web, giving detailed access to performance metrics based on data collected from visitors’ actual sessions on a site. This way you can see the spread in load times across real users, which makes the tool a very natural complement to Lighthouse’s lab-based measurements.
Web apps are approaching native apps
The Chrome team presented some of the activities in Project Fugu, which gives Progressive Web Apps access to browser APIs matching those available to native apps. This concerns basic features such as preventing the device’s screen from turning off, using the operating system’s sharing module, or sending notifications in the background at certain times or locations.
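As a rough sketch, two of these capabilities look like this from JavaScript, assuming a browser that ships the (then experimental) Wake Lock and Web Share APIs – both are feature-detected, since support was behind flags at the time:

```javascript
// Keep the screen awake while the app is in the foreground.
// Returns a wake lock sentinel, or null if the API is unavailable.
async function keepScreenAwake() {
  if (typeof navigator === 'undefined' || !('wakeLock' in navigator)) {
    return null; // API not available in this browser
  }
  return navigator.wakeLock.request('screen');
}

// Invoke the operating system's native share sheet for the current page.
// Returns false if unsupported, so callers can fall back to e.g. a
// copy-link button.
async function shareCurrentPage() {
  if (typeof navigator === 'undefined' || !navigator.share) {
    return false;
  }
  await navigator.share({ title: document.title, url: location.href });
  return true;
}
```

The feature detection matters: these APIs only existed in some Chrome builds at the time, so a web app must always have a graceful fallback.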
Unfortunately, some aspects of the web can still feel a little clumsy compared to native apps. To address this, the Chrome team presented features such as paint holding and portals.
Personally, I’m looking forward to Progressive Web Apps closing the gap with native apps. The fact that this part of the web platform continues to evolve puts immense pressure on Apple to integrate it more closely in Safari and iOS (Apple is often the one dragging its feet here). In time, that will mean a lot for the web.
Paint holding means that the browser no longer displays an empty white screen while a new page loads, but instead retains the rendering of the previous page until the new one is ready to paint. A small change that greatly improves the perceived user experience.
Portals are a modern take on how one page can link to another. In essence, they are an evolved version of iframes: you can embed a preview of one page inside another and make natural, animated transitions between the two. At the conference, Google demonstrated portals embedded as small cards on a page; clicking a card expands it via an animation until it takes over the entire screen, at which point the URL switches to the new domain. The result is a much smoother experience of navigating between two different domains. It should be mentioned, however, that both domains must be configured to work properly with portals.
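In code, a portal is just an element. A minimal, hypothetical sketch – keeping in mind that the feature only worked in Chrome behind a flag at the time, so it is feature-detected here:

```javascript
// Embed a preview of another page as a <portal> element. The CSS class
// name and target URL are placeholders for illustration.
function embedPortal(url) {
  if (typeof HTMLPortalElement === 'undefined') {
    return null; // portals not supported; fall back to a plain link
  }
  const portal = document.createElement('portal');
  portal.src = url;
  document.body.appendChild(portal);
  portal.addEventListener('click', () => {
    // Animate the preview to full screen, then activate the portal,
    // which performs the actual navigation to the embedded page.
    portal.classList.add('expand');
    portal.activate();
  });
  return portal;
}
```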
While major Augmented Reality innovations are happening on the Android platform, the web is following the trend. The Perception Toolkit now provides in-browser detection of QR codes, barcodes and faces (and eventually much more) when using the camera in a web app. In this context, Google also highlighted the existing model-viewer project for viewing 3D models in Augmented Reality.
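Using model-viewer is as simple as dropping a custom element into the page – a hypothetical sketch, where the model file name is a placeholder:

```html
<!-- Load the model-viewer web component, then declare a viewer.
     The "ar" attribute enables Augmented Reality on supported devices. -->
<script type="module"
        src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
<model-viewer src="chair.glb" alt="A 3D model of a chair"
              ar camera-controls></model-viewer>
```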
WebGL is also facing a possible update in the coming years, as work is being done on the new WebGPU spec, which exposes more GPU power for graphics programming and image processing on the web and is a better match for modern GPU architectures.
Show, don’t tell
As a demo project covering both speed optimization and graphics programming, a Google team presented PROXX, a version of Minesweeper designed to work on a wide range of devices – down to the cheapest feature phones – and even for blind users via a screen reader. To keep the game fast, they ran the game logic in Web Workers and the graphics in WebGL wherever possible, and built it as a Progressive Web App so it can be installed from the browser and played offline. In this way, PROXX highlights one of the core benefits of web apps: they can be accessed from virtually any device in the world – expensive and cheap alike.
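The main-thread/worker split can be sketched like this – a minimal illustration of the architecture, not PROXX’s actual code; the file and message names are made up:

```javascript
// Run the game logic in a Web Worker so the main thread stays free
// for rendering and input. Returns null where workers are unavailable.
function startGameLoop(render) {
  if (typeof Worker === 'undefined') {
    return null; // no Web Worker support in this environment
  }
  const logic = new Worker('logic-worker.js'); // hypothetical worker file
  // The worker posts back the updated grid state after each move...
  logic.onmessage = (event) => render(event.data);
  // ...and receives the player's actions as plain messages.
  return {
    reveal: (x, y) => logic.postMessage({ type: 'reveal', x, y }),
  };
}
```

Because the logic never blocks the main thread, the UI stays responsive even on very slow phones – exactly the point the PROXX team was making.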
Finally, one of the announcements that drew the biggest applause: Googlebot, Google’s crawler, will now use the latest version of Chrome – and it will become evergreen, continuously updating itself.