  • 30 Jan 2025 8:46 AM | Terry Findlay (Administrator)

    Joined in January - Peter Forbes

  • 30 Jan 2025 8:41 AM | Terry Findlay (Administrator)

    ADAM ENGST 10 January 2025

    Apple Intelligence has introduced uncertainty into formerly verbatim news article notifications, sometimes producing blatantly erroneous summaries. The company’s response to a formal complaint from the BBC and widespread negative media coverage? It will update the feature to perform better. Jason Snell of Six Colors thinks that’s insufficient. As it stands, apps can’t opt out of having their notifications summarized by Apple Intelligence; Jason is calling on Apple to allow individual apps or similar classes of apps to opt out of notification summarizations. I’m with him on this topic—it’s problematic for Apple to put words in the mouths of others. The Verge’s collection of notification summarization mistakes is reminiscent of auto-correct fails, but at least with those, the user can revert to their original text. With news notifications, Apple Intelligence summarizes a collection of unrelated content, often providing actively unhelpful results.

    These AI summarization mishaps prompted me to think about summaries in general. I’ll admit to a knee-jerk negative reaction whenever I have been offered an option to summarize, whether AI-generated or not. As a fast reader, I was never intimidated by long books in school, and I picked up on my teachers’ disdain for CliffsNotes summaries of classic works of literature.

    Upon reflection, though, my reaction is unfair. While summarization certainly has its problems, dismissing it overlooks something fundamental: summarization isn’t just an overhyped AI feature—it’s core to the human experience.

    Think of summarization as a form of lossy compression, similar to how digital photos are compressed to save space. Both attempt to reduce the amount of data needed to convey the original’s meaning. Damage is always done in the process—a JPEG-compressed image loses fine details from the original, and text summaries lose detail and nuance. Romeo and Juliet is more than a tragedy about two young lovers whose devotion to each other defies their families’ bitter feud and ultimately ends in their untimely deaths. Thanks, ChatGPT, for getting it right.
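
    To make the analogy concrete in code, here is a minimal Swift sketch (assuming a UIImage named photo has already been loaded) that saves the same image at decreasing JPEG quality settings. Each step produces a smaller file but discards more detail, just as tighter summaries shed more nuance.

    ```swift
    import UIKit

    // Minimal sketch of the lossy-compression analogy. Assumes a UIImage
    // is already loaded; lower quality settings yield smaller files but
    // discard more detail, just as tighter summaries shed more nuance.
    func printJPEGSizes(of photo: UIImage) {
        for quality in [1.0, 0.5, 0.1] {
            if let data = photo.jpegData(compressionQuality: CGFloat(quality)) {
                print("quality \(quality): \(data.count) bytes")
            }
        }
    }
    ```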

    We accept the loss of detail because one or more constraints often make summaries more practical or useful for specific purposes. The most common constraint is time—you can read that one-sentence summary of Romeo and Juliet in a few seconds, but watching the play or reading the text would take several hours.

    Another constraint is background. Without a solid grounding in physics, you may not get much from reading “On the Electrodynamics of Moving Bodies,” the paper in which Albert Einstein introduced his special theory of relativity. Those of us who lack that background or a desire to achieve such a state—life is finite, and we can only learn so much—are better off knowing that the paper demonstrates that the laws of physics are the same for all non-accelerating observers and establishes the relationship between space and time, fundamentally altering concepts of simultaneity and motion. I hope that’s a reasonable one-sentence summary.

    Physical display space is a third type of constraint. When you look at the list of messages in Mail, that’s a form of summary—reading your email as a single scrolling document would be insane. One of Apple Intelligence’s features enhances the message list by replacing the snippet from the first few lines of each email message with a summary. Such summaries can be more descriptive than snippets because a snippet shows only the first part of a message rather than its meat. However, they can suffer from the same sort of errors as the news notifications.

    The value of a summary is, within limits, proportional to the difference in length between the source and the summary. The more compressed the summary, the better—again, within limits.

    Those limits vary by situation—I needed a single sentence for the examples above, but such short summaries lose so much of the originals that they aren’t otherwise all that useful. Asking ChatGPT for longer summaries provides significantly richer results. In other words, there’s always a sweet spot between how tightly the summary compresses the original and how much of the original’s information is retained.

    That value explains my discomfort with Apple Intelligence’s summarization options. Because I read quickly, I see no reason to ask Apple Intelligence to generate a summary of a Web page or a conversation in Mail. The downside of losing detail and nuance—and of possible errors—outweighs the upside of saving a few minutes of reading time. Notification summaries are even worse; for me, they save seconds at most and often introduce confusion by summarizing unrelated news articles or information that has changed multiple times within the summary period. The main utility I see for notification summaries is to reduce the irritation of too many notifications from chatty conversations or overactive apps, but Apple has already addressed that by grouping notifications.

    While AI-generated summaries raise valid concerns, it’s essential to recognize that human-created summaries permeate nearly everything we read. For instance, every email message and discussion forum post has a subject line that’s supposed to summarize the message’s intent. People often write poor subject lines, but they remain an essential form of summary—one that AI could actually help improve.

    That’s just the start. Nearly every article or non-fiction book has a title that is, most of the time, the shortest possible summary the author or editor can think of that is both attractive to a potential reader and accurate to its contents. Many articles, including ours, have short summaries that serve as teasers in a list. All academic papers have built-in summaries in the form of abstracts—I rely heavily on those when researching topics outside my sphere of expertise.

    The need to summarize goes even deeper. Most news articles are themselves summaries of the events they cover. Wikipedia may contain 6.9 million articles, but the average length of an article is a mere 690 words—it’s a collection of summaries. While few people would consider a book to be a summary, most non-fiction titles are distillations of the author’s more extensive research.

    I would even argue that human language is itself a form of summary. There’s a reason we say that we “choose our words”—we’re summarizing the rich, complex, and chaotic thoughts and feelings in our minds into a limited but hopefully understandable collection of words. Just as summaries lose nuance and detail, language often falls short of conveying precisely what we’re thinking. Without full-bandwidth telepathy, it’s the best we have for sharing ideas. Summaries are intrinsic to human expression.

    To summarize—I had to!—summaries offer a different value proposition for everyone. Reading speed, language fluency, topical understanding, display space, and other factors play into how valuable a summary of a particular length will be in any given situation. You should ask for AI-generated summaries only when they will provide actual value and you can verify their accuracy when it matters. Finally, remember that just because something can be summarized doesn’t mean it should be.

  • 30 Jan 2025 8:39 AM | Terry Findlay (Administrator)

    ADAM ENGST 7 January 2025

    In 1993, science fiction author William Gibson famously said, “The future is already here—it’s just not very evenly distributed.” Gibson’s quote applies perfectly to Waymo’s robotaxis: self-driving cars that ferry you around like a driverless Uber or Lyft.

    In fact, Waymo, owned by Google parent Alphabet, is barely distributed at all right now. You can only hail a Waymo in Phoenix (where it provides service across 315 square miles), San Francisco (55 square miles), and Los Angeles (80 square miles), with Atlanta, Austin, and Miami coming soon. Even in its markets, Waymo currently operates only on city streets, not freeways, limiting its ability to handle many longer or commonly used routes.

    Riding with Waymo

    Evenly distributed or not, Waymo offers a clear view of the future of driving. Tonya and I spent part of the holiday break visiting my sister in the Bay Area, and she treated us and our son Tristan to a pair of Waymo rides in San Francisco that were as much about experiencing the technology as getting around the city. A decade ago, we took our first ride with a now-defunct ridesharing service called Sidecar; I was amused and somewhat chagrined to discover that our article had a roughly similar title and began with the same quote (see “Travelling to the Future, on the Internet,” 24 June 2014).

    We’ve come a long way since then, but the overall experience wasn’t too dissimilar, apart from the lack of a driver. Just as we had in 2014, my sister pulled up the Waymo app and asked for a pickup, which took just a few minutes.

    Waymo app showing a pickup

    We happened to be at a hotel, so the only confusion was that three other Waymos were doing dropoffs and pickups in the same block. All Waymos look identical—they’re white Jaguar I-PACE electric SUVs—but the dome on top that houses the sensor array (including the all-important 360º LiDAR sensors) also displays the initials of the person who hailed the vehicle. Waymo operates about 300 cars in San Francisco and said it averaged about 4300 trips per day in May 2024.

    A Waymo car

    The rides were essentially perfect. The car navigated San Francisco’s hilly and crowded streets with aplomb. At various times, it backed up to let an SUV in front of us back into a parking spot, paused at an intersection to let a jaywalker finish crossing, and correctly avoided a bike messenger swerving in and out of parked cars.

    The app experience was as expected and much like using Uber or Lyft, albeit with buttons that let us control the music and ask the car to pull over. We didn’t try the latter, but I imagine it’s so people feel like they can always get out if necessary.

    Waymo app during a trip

    During the rides, we were agog, chattering about how it was fascinating to watch the wheel turn on its own, how it turned the wipers on for us since it didn’t need to see out the windshield, and how it dealt with each slightly unusual traffic situation. We also enjoyed the car’s screens, which showed our route along with real-time representations of the vehicles and pedestrians surrounding it. Thanks to LiDAR, the car could discern far more about what was happening around us than we could. I’ve driven in San Francisco a handful of times and would have found navigating the traffic conditions somewhat stressful.

    Waymo screen showing cars and pedestrians

    The trips to and from where we had parked our car cost $11 and $17; the difference was due to surge pricing for the second trip. A comparable Uber or Lyft ride would have been priced similarly.

    Waymo Safety

    Of course, Waymo is not perfect, and there have been well-publicized mistakes, such as the Waymo that drove in circles in a parking lot for a few minutes (though I wonder why the guy didn’t tap the Pull Over button) and Washington Post tech columnist Geoffrey Fowler’s complaint about Waymos not stopping for him in a crosswalk.

    However, these missteps highlight an important advantage of autonomous systems: once Waymo fixes the bug that caused the parking lot circling or tweaks the system to do better with crosswalks, the entire fleet benefits from those improvements. If only teenagers could be updated so effortlessly!

    Waymo is already doing much better than human drivers. A study by the global reinsurance company Swiss Re examined Waymo’s road incidents across the 25 million miles its vehicles have driven and compared the number of incidents that could have resulted in a liability claim against the rate of claims by human drivers in the same cities. Waymo showed an 88% reduction in property damage claims and a 92% reduction in bodily injury claims—it was involved in just nine property damage claims and two bodily injury claims, one of which was caused by a human driver who was fleeing police, ran a red light, and hit the Waymo, another car, and a pedestrian.
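
    To put those percentages in perspective, here is a quick back-of-the-envelope calculation in Swift using only the figures quoted above. Note that the human baseline is inferred from those figures, not reported directly by Swiss Re.

    ```swift
    import Foundation

    // Nine property damage claims over 25 million miles, reported as an
    // 88% reduction versus human drivers in the same cities, implies a
    // human baseline of about 3 claims per million miles. The baseline is
    // inferred from the quoted figures, not reported by Swiss Re.
    let milesInMillions = 25.0
    let waymoClaims = 9.0
    let reduction = 0.88

    let waymoRate = waymoClaims / milesInMillions      // ≈ 0.36 claims per million miles
    let impliedHumanRate = waymoRate / (1 - reduction) // ≈ 3.0 claims per million miles
    print(String(format: "Waymo: %.2f vs. implied human baseline: %.2f claims per million miles",
                 waymoRate, impliedHumanRate))
    ```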

    Waymo is expanding slowly and cautiously, probably as much from the worry about bad PR as the need to learn new environments and situations. I’m looking forward to seeing where we’re at in another decade. With luck, the technology will be far more evenly distributed, including in places with lousy winter weather.

    In the meantime, if you get a chance to use Waymo in Phoenix, San Francisco, or Los Angeles, I strongly encourage you to do so. It’s magical.

  • 30 Jan 2025 8:38 AM | Terry Findlay (Administrator)

    ADAM ENGST 21 January 2025

    I’m slightly conflicted about wearing my Apple Watch while sleeping. I don’t suffer from sleep apnea or have any particular trouble sleeping, and being informed about occasional bouts of early morning insomnia doesn’t tell me anything I didn’t already know. Learning that I spent an average of 7 hours and 32 minutes asleep every night over the last month is slightly interesting but not particularly actionable.

    (Obviously, your mileage may vary. Many people have sleep issues and may benefit from connecting poor sleep nights with particular foods, alcohol, or activities.)

    However, some of the data that the Apple Watch collects during sleep is more valuable. I’ve paid more attention to my heart rate ever since a fainting incident in 2022 while climbing a via ferrata route at Whistler in British Columbia. I was at 7000 feet, cold, wet, and exercising, which should have increased my heart rate and blood pressure. Instead, I experienced a “paradoxical vasovagal response,” where my heart rate and blood pressure suddenly dropped, and I passed out on a ledge—happily, while clipped into the steel cable. It was a lot of fuss for what turned out to be a fluke incident.

    Until then, I hadn’t realized my heart rate could drop problematically low. Now that I regularly wear the Apple Watch at night, the Health app sometimes warns me that my heart rate has fallen below 40 bpm for more than 10 minutes. I’m unperturbed by occasional warnings, but if they were to become more frequent, another discussion with the cardiologist might be warranted.

    I was thus intrigued when Apple announced that watchOS 11 would feature a new Vitals app for tracking heart rate, respiratory rate, wrist temperature, blood oxygen, and sleep duration. Only respiratory rate and wrist temperature were new, but Apple suggested that combining all five could provide a greater understanding of daily health status.

    Vitals app on Apple Watch and Health app description

    The question is, what would it tell me? If I were training hard, some abnormal vitals might suggest adjusting a tough workout, but that’s not a concern at the moment, and relatively few people exercise intentionally enough to care. I suspect most, like me, are more curious about common diseases like the flu, COVID-19, and colds. Would the Vitals app be able to detect that I was coming down with something?

    Apple won’t claim as much, saying, “Vitals is not designed to detect illness or a medical condition.” Those feel like weasel words from lawyers designed to absolve Apple of liability and avoid making claims to which the FDA might take exception. But who wouldn’t wonder if measurements beyond one’s baseline suggest an oncoming illness or medical condition?

    Whether or not Vitals is designed to detect illness, can it do so? The answer—in my only opportunity to test the feature so far—is yes, although the news came as no surprise since I had already decided I was sick before receiving the first abnormal report. Nevertheless, learning that my vitals were out of whack might have been helpful if the timing had been slightly different.

    On Sunday of last week, I didn’t get as much sleep as usual due to having to wake up early to direct a track meet. I was on my feet from 7:30 AM to 2:30 PM, so I wasn’t surprised to be tired afterward. But when I didn’t feel any better by dinnertime, I suspected I was getting sick. Indeed, when I woke up late on Monday morning, the Vitals app reported four outliers: increased heart rate, respiratory rate, wrist temperature, and sleep duration. (Blood oxygen remained stable, which seems positive.) That evening, I tested positive for COVID-19. The symptoms weren’t terrible, but Tuesday and Wednesday also showed elevated respiratory rate and wrist temperature. Thursday’s numbers were all back in range, and while I hadn’t fully recovered, the symptoms had abated even further.
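
    Apple hasn’t published how Vitals decides that a night’s readings are outliers, but conceptually it’s a deviation-from-personal-baseline check. Here is a purely hypothetical Swift sketch of that idea using a z-score over recent nights; Apple’s actual model is surely more sophisticated.

    ```swift
    // Purely hypothetical illustration of baseline-deviation detection —
    // not Apple's algorithm. Flags nights whose readings fall more than
    // `threshold` standard deviations from the mean of the series.
    func outlierNights(in readings: [Double], threshold: Double = 2.0) -> [Int] {
        guard readings.count > 1 else { return [] }
        let mean = readings.reduce(0, +) / Double(readings.count)
        let variance = readings.map { ($0 - mean) * ($0 - mean) }
                               .reduce(0, +) / Double(readings.count)
        let stdDev = variance.squareRoot()
        guard stdDev > 0 else { return [] }
        return readings.indices.filter { abs(readings[$0] - mean) / stdDev > threshold }
    }

    // A week of sleeping heart rates with one feverish night.
    let heartRates = [52.0, 51.0, 53.0, 50.0, 52.0, 51.0, 64.0]
    print(outlierNights(in: heartRates))  // [6]
    ```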

    Vitals charts in Health

    The three days I was most sick are glaringly obvious in the weekly Vitals chart in Health (above left). They stand out even more in the six-month chart (above right), which shows that the only other outliers I’ve had since September were a couple of nights when I couldn’t sleep and read for a few hours. In other words, the only real outliers have been when I was ill.

    It’s presumptuous to draw significant conclusions from a single data point, but I plan to keep wearing the Apple Watch at night to see if outlying vitals might indicate illness again. How about you? If you sleep with the Apple Watch on your wrist, has the Vitals app told you anything you didn’t already know?

  • 30 Jan 2025 8:36 AM | Terry Findlay (Administrator)

    Charles Martin | Jan 12, 2025

    If only all scam calls and text messages were this easy to spot.

    Apple's Messages app has a built-in safeguard to prevent links or phone numbers in unsolicited messages on iPhones from being clickable, and now scammers are trying to trick the unwary into enabling them.

    By default, if you receive a text message on an iPhone or other Apple device from an unknown sender, any links therein are disabled. Once you reply to a message, however, the Messages app then allows clickable links, reports Bleeping Computer.

    Scammers and other threat actors have developed a way around this restriction that savvy users will spot easily, but novice users might fall for. Often, this "smishing" attack comes in the form of a notice of an unpaid bill for a small amount, or a "failed delivery" notification.

    The key to these new scam "warnings" is that they ask the recipient to reply immediately with "Y" or "N" or some variation. They instruct the user to reply, then exit the chat and return to the message in order to click a now-enabled scam link.
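
    The "reply Y" prompt is distinctive enough that it can be screened for. This hypothetical Swift heuristic — not anything Apple actually ships — flags texts that combine a payment or delivery pretext with a reply prompt:

    ```swift
    import Foundation

    // Hypothetical heuristic, not Apple's actual filtering: flag texts
    // that pair a payment or delivery pretext with a "reply Y/N" prompt,
    // the pattern scammers use to get disabled links re-enabled.
    func looksLikeSmishing(_ text: String) -> Bool {
        let lowered = text.lowercased()
        let pretexts = ["unpaid", "delivery", "package", "toll", "fee"]
        let replyBait = lowered.range(
            of: #"reply\s+(with\s+)?["']?[yn]["']?"#,
            options: .regularExpression) != nil
        return replyBait && pretexts.contains { lowered.contains($0) }
    }

    let sample = "USPS: your package is held for an unpaid fee. Reply Y to reactivate the link."
    print(looksLikeSmishing(sample))  // true
    ```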

    Protecting yourself and others from text scams

    If the user falls for this trick, a flood of other scam messages will quickly follow, now with clickable links and alarming language urging the user to click them. Sometimes, the sender will appear to be affiliated with Apple or other tech companies.

    Examples of scam texts. Image credit: Bleeping Computer

    If you have fallen for this trick, the first thing to do is block and report the email address or phone number sending the scam messages. The second is to keep a wary eye out for similar messages from other numbers or email addresses, and to block and report them as soon as they arrive.

    The third is to think of any friends, colleagues, or family members who might also fall for this sort of smishing attack. Let them know what to do if they receive similar messages, and ask them to spread the word to anyone else they think might be taken in by such a scam.

    Such scams often use the scare tactic of a "missing" parcel or an unpaid bill to get users to click scam links. If the user falls for this, the resulting legitimate-looking scam site generally requires the user to enter credit card or bank account information to "pay" a modest fee.

    But that's not what happens. Within minutes or hours, the credit card will be maxed out, or the bank account emptied. In the US alone, some $9 billion was stolen from scam victims in 2022.

    Warn any contacts who might be vulnerable to such a scam to be extremely cautious if they receive an unsolicited text from any person or entity in which an included link has been disabled. They should not reply to the message in any way; they should simply block and report it.

    If you or someone you know has any doubt about whether a message was legitimate, contact the sending entity directly by other means to verify that it sent such a text.

  • 30 Jan 2025 8:33 AM | Terry Findlay (Administrator)

    Charles Martin | Jan 15, 2025

    Notes can record audio and provide transcriptions, starting with iOS and iPadOS 18.

    The Notes app in iOS 18 and iPadOS 18 makes it easy to add an audio recording to a note, and create a written transcription of it if desired. Here's how to do it.

    It has long been possible to add an audio recording to a note created in the Notes app, but in earlier iOS versions it was a little more cumbersome. Users would open the Voice Memos app, record the audio, and then attach that recording to a new note in the Notes app.

    As of iOS 18, that functionality is directly available in Notes — though it is still somewhat hidden until you know how to find it. The big change from the old Voice Memos workflow is that Notes can now also provide a written transcript of what was said, if you are using an iPhone 12 or later.

    Note that the audio transcription feature is currently available only for English, including the US, UK, Australian, Irish, New Zealand, and South African variants.

    This new feature will be a godsend to students, board members, employees, and the secretaries of organizations everywhere.

    Being able to quickly and easily reference a written version of what was said in a meeting or classroom will help users retain the information better. They will also be able to summarize key points, and separate action items from other information.

    Having the original audio to review is also useful. As with audiobooks, re-listening to a speech or lecture can add the speaker's tone, passion, and context to their words, making it come alive in a way that a straight transcript cannot.

    While not always 100 percent accurate, the transcript feature will make your notes more valuable to you and more shareable with others. On devices capable of running Apple Intelligence, its summarization feature can even generate a summary of the transcript for you.

    Recording inside the Notes app

    When you first open a new note in Notes, you'll see a plus button on the lower right side of the note, just above the on-screen keyboard. Tapping it will pop up a set of tools to use in your note.

    These include font controls, bullet lists, table tools, an attachment button, drawing tools and, if available, an Apple Intelligence button.

    • Tap on the attachment button.
    • A menu of options comes up, including "Record Audio."
    • Tapping that will cause a "New Recording" screen to appear, just as it does in the Voice Memos app — which is still available as a separate app.
    • To start a recording, press the red button at the bottom of the screen, and check that the iPhone's mic is picking up your voice.

    Three iPhone screens showing the recording interface with waveform, playback controls, and a real-time transcript; the last screen displays a saved note containing text from the audio.

    You can pause the recording to collect your thoughts and then resume, or just record an entire meeting, lecture, or panel directly.

    • Press the record button again to stop the recording.
    • To the left of the record button is a "word bubble" button with quote marks in it.
    • Tapping that button will provide a real-time transcript of the audio.

    Adding and viewing the transcript, and more

    You can also wait until the recording is done and tap the Done button to the right of the record button to generate a transcript. The full transcript will then appear in a new window, with a gray audio block showing the first couple of lines of the transcript along with a "play" button.

    To add the recording to your note:

    • Look for the "three dots" icon at the top right, and tap it.
    • Tap "Add Transcript to Note."
    • You can then edit the transcript to correct any errors.

    You also have another option in that same menu to copy the transcript, allowing you to paste it elsewhere — into a word processor, an email message, or a blog post, for example.
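
    As an aside for developers: Notes doesn't expose its transcription engine programmatically, but Apple's Speech framework offers similar on-device speech-to-text. Here is a minimal sketch; the file name is a placeholder for any local audio recording.

    ```swift
    import Speech

    // Minimal sketch using Apple's Speech framework — an aside, not part
    // of the Notes workflow. "lecture.m4a" is a placeholder file name.
    func transcribe(fileAt url: URL) {
        SFSpeechRecognizer.requestAuthorization { status in
            guard status == .authorized,
                  let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
                  recognizer.isAvailable else { return }

            let request = SFSpeechURLRecognitionRequest(url: url)
            request.requiresOnDeviceRecognition = true  // keep audio off the network

            _ = recognizer.recognitionTask(with: request) { result, _ in
                if let result, result.isFinal {
                    print(result.bestTranscription.formattedString)
                }
            }
        }
    }

    transcribe(fileAt: URL(fileURLWithPath: "lecture.m4a"))
    ```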

    You can also record and transcribe audio on any iPad model that supports iPadOS 18 or later. Audio transcription in the Notes app is likewise available on any Mac with an M1 processor or later running macOS Sequoia.

    If it is available on your Mac, you can also use Apple Intelligence tools to summarize the transcript, proofread it, or rewrite portions in a different style. 

    With Apple Intelligence, you can summarize a transcription directly in Notes — here, an iPad displays a note summarizing Mughal and Rajasthani architectural features.

    Apple Intelligence is currently available on the iPhone 15 Pro and Pro Max and on the iPhone 16 models. It's also available on iPads with A17 Pro or M1 and later chips running iPadOS 18 or later, and on Macs with an M1 chip or better running macOS Sequoia 15.1 or later.

  • 30 Jan 2025 8:29 AM | Terry Findlay (Administrator)

    Charles Martin | Jan 26, 2025

    A mockup of what an Apple home hub device could look like.

    A leaker with an excellent track record says that Apple is still on track to release its first dedicated smart home controller with an iPad-like screen in the second half of 2025. Here's what to expect from the home hub.

    Mark Gurman's Power On newsletter on Sunday is the latest to chime in on a timeline for Apple's Home Hub. According to the newsletter, the new device, still expected in 2025, will be "the first step toward a bigger role in the smart home" and Apple's "most significant release of the year."

    Previous reports from long-time Apple supply chain analyst Ming-Chi Kuo have suggested the home hub will enter mass production later this year. It is thought to include a high-quality speaker and a camera for FaceTime calls, and could be sold either as a wall-mounted unit or with a standalone base.

    According to the newsletter, Apple is also thought to be bringing out supplementary smart home accessories, such as dedicated indoor security cameras and a doorbell that can use Face ID.

    Previous rumors have suggested that the device will sport an A18 chip and at least 8GB of RAM so that it can run Apple Intelligence. The expected built-in camera and mic may also allow the device to double as a security camera.

    The device is expected to unify the company's push for a much-improved Siri home assistant, use of the Thread and Matter standards for control of smart home devices, and its existing Home software interface. It remains to be seen if Apple will include any iCloud storage offers for security footage.


    Rounding up the rumors

    The purpose of the device would be to leverage an improved Siri and on-screen widgets to act as a master control for smart home devices, similar to how the Home app on other Apple devices works now. 

    That said, the home hub is expected to work with nearly any third-party device that supports the Matter and Thread secure communication standards. It would leverage Apple Intelligence and Siri for automation and control, and possibly run on a dedicated "homeOS" with a focus on widget-based controls for individual devices.

    Matter is a key standard that will allow the home hub to work with third-party devices.

    It's unclear if the new device will use the HomePod branding as suggested by earlier rumors, or move to an entirely new name, such as "Apple HomeHub." It would be Apple's entry into a market currently dominated by Amazon's Echo Show and Google's Nest Hub.

    Commands could also be given to the device using Siri on existing Apple devices, from the iPhone to the HomePod, so that users who are away or in another part of the home could still control the hub. Apple is also rumored to be launching a new lineup of HomePod minis in the second half of 2025.

    A price target for the home hub has not yet been suggested in the various rumors, apart from claims that the base unit would be significantly less expensive than an iPad.
