Blog

  • MobileViews 566: WWDC 2025 & Father’s Day

    In this podcast, Jon Westfall and Todd Ogasawara discussed Apple’s latest Worldwide Developers Conference announcements, noting a significant “tone shift” towards developers. While consumer-oriented features for iPhones, iPads, and macOS devices were unveiled, the speakers highlighted Apple’s clear targeting of developers. A key takeaway for developers was the ability to integrate Apple’s on-device Large Language Model (LLM) into their applications without incurring API fees or requiring a data connection. Jon Westfall, who is developing an app that creates tours from tagged photos, plans to leverage this LLM to generate descriptive text and titles for locations and images.

    The podcast also delved into several new features. iPadOS is receiving a substantial update with improvements to multitasking, including Stage Manager 2.0 for better window management and the introduction of a menu bar. The Journal app, currently on iPhone, will be coming to iPad. A more Mac-like Files app is also expected, though concerns were raised about its integration with third-party cloud services and local storage schemes. Other anticipated features include a Preview app for iPadOS, local audio capture for video conferencing, studio-quality audio recording for AirPods Pro 2 and possibly AirPods 4, a phone app for macOS, and wrist flick gestures for managing calls on watchOS. The speakers also touched upon “liquid glass” visual effects, the “workout buddy” feature in Apple Fitness, the continued lack of significant updates for Siri, and the potential for background tasks to slow down iPads.

    Available via Apple iTunes.
    MobileViews YouTube Podcasts channel
    MobileViews Podcast on Audible.com

  • MobileViews 565: Pre-WWDC; Windows to Linux; OpenAI Codex in ChatGPT


    • Todd Ogasawara shared his struggles with his 2019 HP Envy 360 laptop, which cannot be upgraded from Windows 10 to Windows 11 because its AMD Ryzen processor is unsupported; Google ChromeOS Flex has also deprecated support for it.
    • He has returned to using Linux, noting that while Linux Mint didn’t work with his Bluetooth mouse, Ubuntu did.
    • Apple Find My is now usable in South Korea as of June 1st, 2025, contrary to previous assumptions about privacy laws.
    • Rumors suggest AirPods Pro 2 and AirPods 4 may gain camera control, sleep detection, and new head gestures for answering calls and dismissing notifications.
    • Jon Westfall observed that many students wear AirPods constantly, even when not actively listening to content.
    • There’s speculation about iPadOS 26 potentially including a menu bar, and both speakers expressed hope for improvements to Stage Manager.
    • OpenAI’s Codex, now available to ChatGPT Plus subscribers, was discussed for its AI coding capabilities, with Jon sharing an experience of it detecting an “inconsistency” in his code that was actually an intentional change.
    • The speakers humorously compared the AI’s “helpful” persistence to a “code therapist”.
    • They also joked about AI’s increasing presence and a “Mechanical Turk” scenario where seemingly AI-powered services are actually human-driven.
    • The upcoming WWDC event was mentioned, with anticipation for new features and hardware support announcements.

    Available via Apple iTunes.
    MobileViews YouTube Podcasts channel
    MobileViews Podcast on Audible.com

  • MobileViews 564: Jon’s experience with AI in higher education


    In this podcast, Jon Westfall and I discussed a wide range of topics.

    A significant portion of our conversation centered on the continuing proliferation of AI in consumer products. We noted an increasing sense of “AI fatigue”—the saturation of artificial intelligence in nearly every product and announcement. Although I am personally intrigued by developments in AI-generated video and imaging, especially from Google and Meta, I also find the AI trend overwhelming at times. I am even considering subscribing to Google One’s AI Premium offering to further explore these capabilities, particularly for personal creative projects.

    We also speculated on potential announcements from Apple’s upcoming WWDC, especially regarding artificial intelligence and whether Apple will finally deliver tangible AI features, following a less-than-smooth rollout of “Apple Intelligence.” I expressed hope for hardware updates, such as a refreshed Apple Watch Ultra or a more affordable version of the Vision Pro headset—rumored to be called the Vision Air.

    I noted that I recently began revisiting older episodes of this podcast, some dating back to 2008. I’ve started re-editing and publishing select episodes as audiograms. One of these featured an interview with the developers of Google Earth for iPhone, recorded in early 2009—just six months after the App Store’s debut. It was particularly meaningful to hear the voice of my late friend Mike Morton, one of the app’s original developers.

    We also touched on some of my ongoing technology experiments. I’ve been attempting to repurpose a 2019 AMD laptop that no longer supports Windows 11. My initial plan to install ChromeOS Flex was thwarted by hardware incompatibility, so I’ve shifted my attention to Linux Mint. Although UEFI settings prevented me from booting from a USB drive, I plan to revisit this project soon.

    Jon offered a compelling perspective on the evolving role of AI in higher education. He discussed how he and other faculty are adapting to student use of AI tools such as ChatGPT, emphasizing the importance of transparency, responsible use, and pedagogical innovation. Jon’s work in this area demonstrates a balanced, practical approach that integrates emerging technology while preserving academic integrity.

    We concluded the episode with a broader reflection on the societal implications of AI, particularly the concern that up to 50% of entry-level jobs may be impacted in the coming years. As someone no longer in the workforce, I observe these shifts with a mix of concern and curiosity, especially regarding how younger generations will navigate such disruptions. We acknowledged the historical cycles of technological change—from calculators and word processors to broadband and mobile computing—and how each brought both fear and opportunity.

    Available via Apple iTunes.
    MobileViews YouTube Podcasts channel
    MobileViews Podcast on Audible.com

  • MobileViews retro-podcast Jan. 23, 2009 discussion with the original Google Earth for iOS developers

    This podcast was recorded way back on Jan. 23, 2009 (16 years ago) with the original Google Earth for iPhone developer team: my old friend (the late) Mike Morton, David Oster, and (product manager) Peter Burch, along with Google spokesperson Aaron Stein. Although the iPhone was launched on June 29, 2007, the App Store was not launched until a year later, on July 10, 2008. So, iPhone apps had only been available for about six months when we recorded this podcast. I’m taking advantage of the relatively new Adobe Podcast (V2) audio enhancement and audiogram creation features to re-post this podcast as one, I think, of some historical interest. I also used Google Gemini to write a summary of the podcast as well as a more detailed bullet-point discussion list for the blog on MobileViews.com.

    SUMMARY
    In this podcast, recorded on Jan. 23, 2009, the developers of Google Earth for iPhone discussed the creation and features of the mobile application. The team, including iPhone engineers Mike Morton and David Oster, shared insights into the development process. With extensive Macintosh experience, they found the iPhone SDK surprisingly similar to OS X programming, which provided a significant advantage. A long-held dream for the Google Earth team was to enable users to “hold the earth in your hand,” a vision only recently made possible by technological advancements.

    The developers addressed the challenge of optimizing Google Earth for the iPhone’s smaller screen and less powerful CPU. They emphasized streamlining the application by “trimming out some of the fat” accumulated in the desktop version and leveraging years of OpenGL tuning. A key focus was on creating a user-friendly interface that prioritized data display over decorative elements, influenced by Edward Tufte’s principles. The touch interface of the iPhone presented a unique opportunity to create a more intuitive way of interacting with the Earth, leading to the development of custom gesture analysis. Looking ahead, the team plans to continue developing Google Earth for iPhone, adding new features that cater to both existing desktop functionalities and mobile-specific contexts.

    DETAILED DISCUSSION SUMMARY

    • The Google Earth for iPhone development team included Mike Morton and David Oster (iPhone engineers), Peter Burch (product manager for Google Earth), and Aaron Stein (spokesman for Google).
    • Mike Morton and David Oster, who worked on Google Earth for iPhone, have about 25 years of Macintosh experience and had been developing for the iPhone since programming was opened up in the summer of 2008.
    • The iPhone SDK was surprisingly similar to programming OS X on a Mac, which was a “leg up” for experienced Mac developers.
    • Development tools for iPhone are based on GCC, allowing the use of C and C++ in addition to Objective-C.
    • Porting Google Earth to the iPhone was a long-standing dream of the Google Earth team, predating the iPhone’s introduction.
    • Technology barriers had previously prevented the realization of holding “the earth in your hand”.
    • Google Earth, originally Keyhole Earth Viewer, has been running as an application since around 2001, providing the team with experience in high-performance graphics applications on lower-powered hardware.
    • The Google Earth for iPhone was a “project project” and not a “20% time project”.
    • Achieving quick response times on the iPhone’s relatively weaker CPU involved significant performance tuning and “trimming out some of the fat” from the desktop version.
    • The fast performance also benefited from about ten years of OpenGL tuning on the desktop version of Google Earth.
    • Development challenges included adapting to the smaller screen size and deciding which features to include or exclude.
    • The team aimed to make the interface simple and uncluttered, with a focus on displaying data rather than decorative elements, influenced by Edward Tufte’s work.
    • Key features added included Wikipedia articles and panoramas of photos, making the product about exploring user content, not just satellite imagery.
    • The iPhone’s touch interface provided a better way of interacting with the earth than the desktop version.
    • The developers had to create their own gesture analyzer because Apple’s SDK provides raw finger position data rather than pre-defined gestures (see the sketch after this list).
    • Unexpected uses of Google Earth for iPhone include its adoption by the scientific community for visualizing weather.
    • Users can provide feedback and ask questions through a help center group, with a link provided in the iTunes description for Google Earth.
    • Google plans future updates for Google Earth for iPhone, adding new features and building on the current version, with a focus on mobile-specific functionalities.
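
    The custom gesture analysis mentioned above can be illustrated with a toy example. The sketch below is hypothetical Python, not the team’s actual code (Google Earth for iPhone was built against the iPhone SDK, not Python); it only shows the kind of work a hand-rolled gesture analyzer has to do when the platform reports raw touch coordinates: compare two successive frames of two-finger positions and decide whether the user is pinching, rotating, or panning. All threshold values are arbitrary illustrative choices.

    import math

    def classify_two_finger_gesture(prev, curr,
                                    move_thresh=10.0, scale_thresh=0.1, angle_thresh=0.15):
        """Classify a two-finger gesture from raw touch positions.

        prev and curr are ((x1, y1), (x2, y2)) pairs: the two finger positions
        in the previous frame and in the current frame. Thresholds are
        arbitrary illustrative values (pixels / radians).
        """
        (p1, p2), (c1, c2) = prev, curr

        def dist(a, b):
            return math.hypot(b[0] - a[0], b[1] - a[1])

        def midpoint(a, b):
            return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

        def axis_angle(a, b):
            return math.atan2(b[1] - a[1], b[0] - a[0])

        scale = dist(c1, c2) / max(dist(p1, p2), 1e-6)      # finger spread ratio
        rotation = axis_angle(c1, c2) - axis_angle(p1, p2)  # change of the finger axis
        pan = dist(midpoint(p1, p2), midpoint(c1, c2))      # movement of the midpoint

        if abs(scale - 1.0) > scale_thresh:
            return "pinch-zoom"
        if abs(rotation) > angle_thresh:
            return "rotate"
        if pan > move_thresh:
            return "pan"
        return "none"

    # Fingers spreading apart between frames reads as a pinch-zoom.
    print(classify_two_finger_gesture(((100, 100), (200, 100)),
                                      ((80, 100), (220, 100))))
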
  • MobileViews 563: w/Jared Kuroiwa – Google Android XR, Headsets, and AI Integration


    In episode 563 of the MobileViews Podcast, I’m joined by guest co-host Jared Kuroiwa to discuss a few of the announcements from Google I/O 2025, with a strong focus on Android XR and the new generation of mixed and extended reality headsets.

    Key highlights:

    • Android XR Headsets: Google showcased XR devices from partners like Samsung (Project Moohan) and Xreal (Project Aura). These headsets vary in design—from immersive goggles to stylish glasses—and rely on connected Android smartphones for processing. 
    • Three-Part Requirement: To fully use these devices, users will need (1) a compatible Android phone, (2) XR glasses, and (3) a Gemini AI subscription, adding cost and complexity. 
    • Design & Use Cases: Devices like Xreal’s Aura and Samsung’s headset aim to combine AR displays with real-world usability, offering features like translation, contextual info, and AI assistance—akin to Meta’s Ray-Ban smart glasses. 
    • Local vs. Cloud AI: Jared shares insights into running local LLMs on mini PCs and the promise of lightweight, on-device AI, comparing it to cloud-based tools like ChatGPT and Google Gemini (see the short sketch after this list).
    • Other Tools Discussed: Google Whisk for generative video, the future of XR optics, device compatibility issues, and the role of design in public acceptance of smart eyewear. 
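
    For the local-LLM point above, here is a minimal sketch of what running a small model entirely on a mini PC can look like in Python. It assumes the Ollama runtime is installed with a model already pulled and uses the third-party ollama package; the podcast does not say which stack Jared actually uses, so the model name and setup are placeholders.

    # Minimal local-LLM sketch. Assumptions: the Ollama runtime is running locally,
    # a small model has been pulled (placeholder name "llama3.2"), and the
    # third-party "ollama" Python package is installed (pip install ollama).
    import ollama

    response = ollama.chat(
        model="llama3.2",  # placeholder; any locally pulled model works
        messages=[{"role": "user",
                   "content": "In two sentences, what is Android XR?"}],
    )

    # The reply is generated on the local machine; no cloud API call is involved.
    print(response["message"]["content"])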

    The episode delivers a deep dive into the near-future of consumer XR, wearable AI, and how tech companies are shaping digital experiences beyond the smartphone.

    Available via Apple iTunes.
    MobileViews YouTube Podcasts channel
    MobileViews Podcast on Audible.com

  • MobileViews 562: Using PLAUD NotePin to record and analyze this podcast; Python dev tools; anticipating Google I/O



    For this podcast, Jon Westfall recorded our discussion in parallel with a PLAUD NotePin and had it generate a detailed summary and a kind of mind map. I fed PLAUD’s detailed summary into Google NotebookLM and had it create the condensed summary below:

    They discuss various technological tools and their applications, beginning with their experiences using a wearable transcription device, the Plaud NotePin, for capturing ideas during meetings. The discussion expands to the potential benefits and privacy considerations of recording interactions, touching on the limitations of inexpensive body cams and the potential of smartphones for video evidence. The hosts then explore how AI-powered transcription and summarization services can enhance content consumption and creation, citing examples of using these tools with podcasts and historical audio. They anticipate future AI advancements, particularly in video editing with tools like Google Flow and potential new extended reality (XR) glasses announced at Google I/O. The conversation also covers practical Python scripting for tasks like downloading YouTube transcripts, using development tools, and navigating file-sharing challenges, as well as integrating calendars with an Outlook plugin. Finally, they touch on the capabilities of AI assistants like Microsoft Copilot Vision and the intersection of AI with media and entertainment, referencing the Apple TV+ “Murder Bot” TV series.
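
    On the Python scripting mentioned above: the episode does not say which tool was used to pull YouTube transcripts, but a minimal sketch using the third-party youtube-transcript-api package (and its long-standing get_transcript() interface) looks roughly like this; the video ID is a placeholder, not one from the show.

    # Hypothetical transcript-download sketch; assumes youtube-transcript-api
    # is installed (pip install youtube-transcript-api).
    from youtube_transcript_api import YouTubeTranscriptApi

    def fetch_transcript_text(video_id: str) -> str:
        """Return a video's transcript as a single plain-text string."""
        # Each entry is a dict with 'text', 'start', and 'duration' keys.
        entries = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(entry["text"] for entry in entries)

    if __name__ == "__main__":
        print(fetch_transcript_text("VIDEO_ID_HERE")[:500])  # placeholder ID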

    PLAUD NotePin mindmap of this podcast
    Google NotebookLM expandable mindmap of this podcast

    Available via Apple iTunes.
    MobileViews YouTube Podcasts channel
    MobileViews Podcast on Audible.com