The Future of TV and Quick Access Wallets: Google’s Gemini and Android innovations
The tech world is abuzz with anticipation for the upcoming Consumer Electronics Show (CES) 2025, and Google has already dropped some exciting news. The company is poised to revolutionize the way we interact with our televisions by bringing its powerful Gemini AI models to Google TV. This move promises a more intuitive and helpful TV experience, seamlessly integrating with the existing Gemini ecosystem on phones, tablets, headphones, and soon, Wear OS smartwatches.
Imagine effortlessly navigating your vast media library: no more tedious scrolling or complicated searches. With Gemini on Google TV, finding the perfect movie or show becomes as simple as asking a question. But the enhancements go far beyond search. Google envisions a future where your TV becomes a hub for knowledge and exploration.
Picture asking questions about history, science, or current events and receiving comprehensive answers, complete with relevant video clips for added context. This echoes the ongoing testing of the Gemini-powered Google Assistant on Nest Mini and Nest Audio devices, where the large language model (LLM) delivers detailed, AI-driven responses to general knowledge queries. That testing includes more natural-sounding voices, the ability to ask follow-up questions, and the flexibility to interrupt a response with a new question.
The integration of Gemini into Google TV also unlocks a new level of personalization and interactivity. Picture creating custom artwork with your family directly on the TV screen, controlling your smart home devices while the TV is in ambient mode, or even getting a concise overview of the day’s news.
This builds upon previous innovations like AI screensavers and AI-generated summaries for movies and shows, further enhancing the overall viewing experience. While Google has only offered a sneak peek of what Gemini can do for televisions, the rollout is expected to begin later this year on select Google TV devices. This suggests a phased approach, allowing Google to refine the technology and ensure a smooth transition for users.
Beyond the living room, Google is also exploring ways to streamline access to digital wallets on Android devices. A new feature under development suggests a potential shortcut for launching Google Wallet using the double-tap power button gesture.
Currently, many Android phones utilize this gesture for quick access to the camera, a handy feature for capturing spontaneous moments. While some manufacturers allow customization of this gesture, Google Pixel phones have traditionally been limited to the camera function. However, this may soon change.
Deep within the second developer preview of Android 16, a new configuration labeled “config_walletDoubleTapPowerGestureEnabled” has been discovered. This suggests that a double tap of the power button could be configured to launch the default wallet app, which, on Pixel phones, would be Google Wallet.
This builds upon the Android 15 update, which gave users the ability to choose their default wallet app through the settings menu. This new gesture would presumably respect this user preference, launching whichever app is designated as the default.
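For the technically curious, here is a minimal Kotlin sketch of how one might probe a device for the newly spotted flag. The resource name comes straight from the teardown, but whether it resolves this way on shipping builds, and what the gesture ultimately does, remains an assumption.

```kotlin
import android.content.res.Resources

// Probe the Android framework for the flag found in the Android 16 DP2
// teardown. Internal config resources may not be resolvable on every build,
// so treat a missing identifier as "feature disabled".
fun isWalletDoubleTapGestureEnabled(): Boolean {
    val res = Resources.getSystem()
    val id = res.getIdentifier(
        "config_walletDoubleTapPowerGestureEnabled", "bool", "android"
    )
    return id != 0 && res.getBoolean(id)
}
```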
While the discovery of this configuration is intriguing, the exact implementation remains unclear. It’s unknown whether Google will create a new settings menu specifically for this gesture or integrate it into the existing camera shortcut settings. Further exploration of future Android builds will be necessary to uncover the finer details.
Another question mark hangs over the intended target of this feature. While it seems most likely to be aimed at phones, it could conceivably be meant for other form factors, such as smartwatches. That said, many Wear OS smartwatches already launch the wallet with a double tap of the power button, which would make a wearable target redundant and therefore seems less likely.
These developments from Google point towards a future of more intelligent and user-friendly technology. Gemini’s arrival on Google TV promises to transform the way we interact with our televisions, while the potential wallet shortcut on Android devices aims to simplify everyday transactions. As technology continues to evolve, Google is at the forefront, pushing the boundaries of what’s possible and striving to create a more seamless and intuitive user experience.
Personalized Audio Updates: Google’s new “Daily Listen” experiment
Imagine starting your day with a concise, personalized audio briefing tailored to your interests. This is the premise of Google’s latest Search Labs experiment, “Daily Listen.” This innovative feature leverages the power of AI to curate a short, informative audio summary of the topics and stories you follow, offering a fresh way to stay updated.
Daily Listen isn’t just another podcast app. It’s deeply integrated with Google’s understanding of your interests, gleaned from your activity across Discover and Search. By analyzing your searches, browsing history, and interactions with news articles, Daily Listen crafts a unique listening experience, delivering a personalized overview in approximately five minutes.
This personalized audio experience is seamlessly integrated into the Google app on both Android and iOS. You’ll find it within the “Space” carousel, conveniently located beneath the search bar. The Daily Listen card, clearly marked with the date and the label “Made for you,” serves as your gateway to this personalized audio feed. Tapping the card opens a full-screen player, ready to deliver your daily briefing.
Emblazoned with the Gemini sparkle, a visual cue indicating the use of Google’s advanced AI model, Daily Listen presents a text transcript in the space typically reserved for cover art. This feature not only enhances accessibility but also allows users to quickly scan the key points of each story. Recognizing that generative AI is still evolving, Google encourages user feedback through a simple thumbs up/down system, enabling continuous improvement of the feature’s accuracy and relevance.
The player interface is designed for intuitive navigation. A scrubber with clearly defined sections allows you to jump between stories, while standard controls like play/pause, 10-second rewind, next story, playback speed adjustment, and a mute option provide complete control over your listening experience. If you prefer to silently review the content, the transcript is readily available.
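Google has not said what Daily Listen is built on, but the controls it describes map neatly onto AndroidX Media3. The sketch below is an assumed stack, not Google's implementation; the buildBriefingPlayer function and episodeUrl parameter are hypothetical.

```kotlin
import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.ExoPlayer

// Illustrative only: configure a player whose seek-back step matches the
// described 10-second rewind control.
fun buildBriefingPlayer(context: Context, episodeUrl: String): ExoPlayer {
    val player = ExoPlayer.Builder(context)
        .setSeekBackIncrementMs(10_000) // the 10-second rewind control
        .build()
    player.setMediaItem(MediaItem.fromUri(episodeUrl))
    player.prepare()
    return player
}
```

The remaining controls correspond to standard Player calls: play() and pause(), seekToNextMediaItem() for the next story, setPlaybackSpeed(1.5f) for speed adjustment, and volume = 0f for mute.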
At the bottom of the screen, a scrollable list of “Related stories” provides further context and depth for each section of the audio summary. A “Search for more” option allows you to dive deeper into specific topics, and the familiar thumbs up/down feedback mechanism allows you to further refine the system’s understanding of your interests. As you browse these related stories, a minimized player remains docked at the top of the screen, ensuring easy access to the audio feed.
This exciting experiment is currently available to Android and iOS users in the United States. To activate Daily Listen, simply navigate to Search Labs within the Google app. After enabling the feature, it takes approximately a day for your first personalized episode to appear. This isn’t Google’s first foray into experimental features within Search Labs. Previously, they’ve used this platform to test features like Notes and the ability to connect with a live representative.
Beyond the Daily Listen experiment, Google is also expanding the capabilities of its Home presence sensing feature. This feature, which helps determine Home & Away status and triggers automated routines, is now being tested to integrate with “smart media devices.” This means that devices like smart speakers, displays, TVs (including those using Google streaming devices), game consoles, and streaming sticks and boxes can now contribute to presence sensing by detecting media playback or power status.
This integration provides a more comprehensive understanding of activity within the home. For example, if the TV is turned on, the system can infer that someone is likely present, even if other sensors haven’t detected movement. This enhanced presence sensing can further refine home automation routines, making them more accurate and responsive.
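As a toy illustration of that inference (not Google's actual logic), presence can be modeled as any positive signal among phones, motion sensors, and media devices:

```kotlin
// Hypothetical model: a media device contributes to presence if it is
// powered on or actively playing something.
data class MediaDevice(val name: String, val poweredOn: Boolean, val playing: Boolean)

fun someoneIsLikelyHome(
    phoneDetected: Boolean,
    motionDetected: Boolean,
    mediaDevices: List<MediaDevice>,
): Boolean {
    // A TV that is on, or a speaker that is playing, implies presence even
    // when no motion has been reported.
    return phoneDetected || motionDetected ||
        mediaDevices.any { it.poweredOn || it.playing }
}
```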
This experimental feature can be found within the Google Home app under Settings > Presence Sensing. A new “Media Devices (experimental)” section appears below the existing options for phones, speakers, and displays. Devices like Chromecast, Chromecast with Google TV, and Google TV Streamer are currently included in this test.
This media device integration is part of the Google Home Public Preview, which also includes other ongoing experiments like the rollout of Admin and Member access levels for Google Home, testing Gemini in Google Assistant on Nest devices, and exploring “Help me create” for custom automations. These developments signify Google’s ongoing commitment to enhancing the smart home experience and providing users with more personalized and intuitive tools.
How Google Photos might revolutionize photo organization
For many of us, our smartphones have become the primary keepers of our memories. We snap photos of everything – family gatherings, breathtaking landscapes, everyday moments that we want to hold onto. But as our photo libraries grow, managing them can become a daunting task.
Scrolling endlessly through a chaotic jumble of images isn’t exactly the nostalgic experience we’re hoping for. That’s where apps like Google Photos come in, offering tools to help us make sense of the digital deluge. And it seems Google is gearing up to give us even more control over our precious memories.
Google Photos has long been a favorite for its smart organization features. Its AI-powered capabilities, like facial recognition and automatic album creation, have made it easier than ever to find specific photos. One particularly useful feature is “Photo Stacking,” which automatically groups similar images, decluttering the main photo feed. Imagine taking a burst of photos of the same scene; Photo Stacking neatly bundles them, preventing your feed from becoming overwhelmed with near-identical shots. However, until now, this feature has been entirely automated, leaving users with little say in which photos are grouped. If the AI didn’t quite get it right, there wasn’t much you could do.
But whispers within the latest version of Google Photos suggest a significant change is on the horizon: manual photo stacking. This potential update promises to hand the reins over to the user, allowing us to curate our own photo stacks. What does this mean in practice? Imagine you have a series of photos from a family vacation. Some are posed group shots, others are candid moments, and a few are scenic landscapes from the same location. With manual stacking, you could choose precisely which photos belong together, creating custom collections that tell a more complete story.
This shift towards user control could be a game-changer for photo organization. Currently, if the automatic stacking feature misinterprets a set of photos, you’re stuck with the results. Perhaps the AI grouped photos from two slightly different events, or maybe it missed some subtle similarities between images you wanted to keep together. Manual stacking would eliminate these frustrations, allowing you to fine-tune your photo organization to your exact preferences.
While the exact implementation remains to be seen, we can speculate on how this feature might work. It’s likely that users will be able to select multiple photos from their main view and then choose a “Stack” option from the menu that appears at the bottom of the screen – the same menu that currently houses options like “Share,” “Favorite,” and “Trash.” This intuitive interface would make manual stacking a seamless part of the existing Google Photos workflow.
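To make the idea concrete, here is a hypothetical Kotlin data model for a user-curated stack; the names and fields are illustrative, not Google Photos’ internal schema.

```kotlin
// Hypothetical model of a manually curated stack: one photo fronts the stack
// in the main feed, the rest are folded in behind it.
data class PhotoStack(
    val topPhotoId: String,     // the shot surfaced in the main feed
    val memberIds: Set<String>, // every photo folded into the stack
)

fun stackSelection(selectedIds: List<String>): PhotoStack? {
    if (selectedIds.size < 2) return null // a stack needs at least two photos
    return PhotoStack(
        topPhotoId = selectedIds.first(),
        memberIds = selectedIds.toSet(),
    )
}
```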
The implications of this potential update are significant. It’s not just about decluttering your photo feed; it’s about empowering users to tell their stories more effectively. By giving us the ability to manually group photos, Google is essentially providing us with a new level of creative control over our memories. We can create thematic collections, highlight specific moments, and curate our photo libraries in a way that truly reflects our personal experiences.
This move also speaks to a larger trend in user interface design: giving users more agency. Instead of relying solely on automated systems, developers are increasingly recognizing the importance of providing users with the tools to customize their experience. Manual photo stacking in Google Photos perfectly embodies this principle, putting the power of organization directly into the hands of the user.
While this feature is still in the development stages, its potential impact on how we manage and interact with our photos is undeniable. It promises to transform Google Photos from a simple photo storage app into a powerful storytelling tool, allowing us to connect with our memories in a more meaningful way. As we await further details and the official rollout of this feature, one thing is clear: the future of photo organization looks brighter than ever.
Android
Android Auto expands horizons with 13.5 update and Pixel devices receive January 2025 security patch
The world of in-vehicle technology is constantly evolving, and Google’s Android Auto is keeping pace with its latest beta update, version 13.5. This release marks a significant step forward in inclusivity, broadening support beyond traditional cars and addressing some long-standing oversights. Meanwhile, Google has also rolled out the January security patch for its Pixel devices, ensuring users remain protected against the latest vulnerabilities.
One of the most noticeable changes in Android Auto 13.5 is the shift in terminology from “car” to “vehicle.” This seemingly small tweak reflects a broader commitment to supporting a wider range of transportation modes. The update explicitly mentions motorcycles within its code, signaling a move to cater to riders who have been utilizing the platform for some time.
This means that phrases like “Connected cars” are now “Connected vehicles,” and the “Connect a car” button has been appropriately updated to “Connect a vehicle.” This change may seem minor, but it represents a significant shift in perspective and a more inclusive approach to in-vehicle technology. It acknowledges that the road is shared by more than just four-wheeled automobiles.
Beyond the change in wording, the update also brings some exciting developments under the hood. New icons specifically designed for motorcycles have been added, along with assets for various vehicle brands, including Geely, Leapmotor, Fiat, and Lucid Motors.
The inclusion of Lucid is particularly noteworthy, as the company previously announced that its Lucid Air model would gain Android Auto support in late 2024. While the update hasn’t officially rolled out for Lucid vehicles yet, its presence in the Android Auto 13.5 beta suggests that the final certification is imminent. This hints at a closer integration between Android Auto and the growing electric vehicle market.
This expansion beyond traditional cars is a welcome development. For years, the term “car” within the Android Auto interface felt limiting, failing to acknowledge the diverse landscape of personal transportation. By embracing the broader term “vehicle,” Google is not only improving the user experience for motorcycle riders and other non-car vehicle owners but also positioning Android Auto as a more versatile and adaptable platform for the future of mobility.
While details about other in-development features, such as “Car Media,” remain scarce, the 13.5 update clearly demonstrates Google’s ongoing investment in Android Auto. This update lays the groundwork for a more inclusive and comprehensive in-vehicle experience.
In other news, Google has also released the January security patch for its Pixel lineup. This update addresses a number of security vulnerabilities, ensuring that Pixel users remain protected from potential threats. The update is rolling out to a wide range of Pixel devices, including the Pixel 6, 6 Pro, 6a, 7, 7 Pro, 7a, Tablet, Fold, 8, 8 Pro, 8a, 9, 9 Pro, 9 Pro XL, and 9 Pro Fold.
The January security patch includes fixes for 26 security issues under the 2025-01-01 patch level and 12 under the 2025-01-05 patch level. These vulnerabilities range in severity from high to critical, underscoring the importance of installing the update promptly. Google’s dedicated Pixel security bulletin also lists one additional fix.
The update is being distributed as both factory and OTA (over-the-air) images. Users should receive a notification on their devices prompting them to download and install it. The download size varies by device; on a Pixel 9 Pro, it weighed in at 93.22 MB.
Specific build numbers for various Pixel models and regions have also been released, allowing users to verify they have received the correct update.
This concurrent release of Android Auto 13.5 and the January Pixel security patch showcases Google’s commitment to both innovation and security within its ecosystem. By expanding the reach of Android Auto and prioritizing user safety with timely security updates, Google continues to enhance the overall user experience for its customers. The focus on inclusivity in the Android Auto update, along with the consistent security measures for Pixel devices, demonstrates a holistic approach to technology development.