
Bee review: I outsourced my memory to AI and all I got was fanfiction


For every memory seared into my brain, there are thousands of others I can’t retain or can’t trust. I spent the last eight months forgetting to fix a homeowners association (HOA) violation despite numerous reminder emails. My cousins and I have been trapped in our own version of Akira Kurosawa’s Rashomon over who said what at grandma’s funeral. Cursed with the working memory of a goldfish, I’ve apologized dozens of times to everyone for failing to do the things I said I would.

These are the problems that Bee, a $50 AI wearable, aims to solve.


The Good

  • Good at broadly summarizing themes in your life
  • Most helpful at summarizing meetings
  • Can help you remember to do random tasks
  • Good battery life
  • It’s only $50

The Bad

  • Fact-checking your memories is a dystopia I’m not ready for
  • Struggles to reliably differentiate speakers
  • It listens to all your conversations
  • Several first-gen quirks
  • iOS only for now

Unlike the Rabbit R1 or the Humane AI Pin, Bee isn’t a flashy gizmo designed to replace your smartphone. Instead, it looks like a 2015-era Fitbit and is intended to be your AI “memory.” You strap it onto your wrist or clip it onto your shirt. It’ll then listen to all your conversations. Those conversations get turned into transcripts, though no audio is saved in the process. Depending on your comfort level, you can permit it to scan through your emails, contacts, location, reminders, photos, and calendar events. Every so often, it’ll summarize pertinent takeaways, suggest to-do items, and create a searchable “history” that the Bee chatbot can reference when you ask it about the details of your life. At 8PM, you’ll get a daily AI-generated diary entry. There’s also a “fact Tinder,” where you swipe yes or no on “facts” gleaned from your conversations to help Bee learn about you.


Let’s play ‘Guess Which Facts Are Actually Facts.’ Hint: this one hasn’t been true for 19 years.

So if your HOA emails you for the 20th time about a faulty smoke alarm, it might suggest that as a to-do item. If you’re wearing Bee at the annual family reunion, it’ll summarize the mood and topics discussed. Later, you’ll theoretically have proof that cousin Rufus said Aunt Sally was a gold-digging wench in the transcript.

There’s a glimmer of a good idea here. But after a month of testing, I’ve never felt more gaslit.

I wore the Bee to a demo for the BoldHue foundation printer. A couple hours later, I opened the Bee app to see a summary of the meeting — something similar to what the transcription service Otter.ai does when I upload audio files. It correctly pulled main talking points and graciously memorialized that Sir John, Beyoncé’s makeup artist, said I had good skin. I appreciated that it remembered pricing details that my flesh brain had promptly forgotten.

It also got the name of the product completely wrong.

The Bee AI wearable in the yellow wristband surrounded by several yellow objects, such as a telephone, alarm clock, camera, rubber duck, and a toy bicycle.

Bee looks an awful lot like a 2015-era Fitbit. No screen, just mics and a button.

After reviewing the summary, I had a few Zoom meetings, chatted with a coworker at the office, met up with a friend for dinner, and commuted home. Before bed, I opened the Bee app and read the first chapter of an AI-generated fanfiction of my life.

“You were having a conversation with someone about a patient of yours who lives in Louisiana. The patient appears to be causing harm to another person.”

“Victoria and her friend were driving, reminiscing about childhood memories. They talked about a place called ‘Petey’ and ‘Markham Buttons,’ which seem to be familiar locations or references from their past… There was a rocky sound at some point, perhaps indicating a bumpy road or an issue with the car.”

None of these things happened. At least, not as written. The bumpy car ride was Bee misinterpreting the horrors of commuting on a NJ Transit bus. Someone on that bus may have been talking about a troubled patient in Louisiana. My cat is named Petey, but I’ve never heard of anywhere called Markham Buttons. And according to the dinner transcript, my friend and I never discussed childhood memories.

Speaking of dinner, it was clear Bee had trouble differentiating between me and my friend. It also struggled to tell us apart from our waiter. I tried labeling speakers, but that got old fast.

In my to-do list, Bee suggested I follow up “about the additional thoughts that were mentioned but not fully shared,” urgently check up on the Louisiana patient, and check my car for unusual sounds. Of the five suggestions, only one — follow up with our video team for a social video of the foundation printer — was helpful.

I compared Bee’s version of my day with my diary entry. I wrote about trying Paddington Bear-themed marmalade sandwiches in our office kitchen. (Not a fan. I did, however, note that the strawberry-flavored shortbread cookie was excellent.) I wrote several paragraphs about a sensitive text conversation I had with a friend. Bee never picked up these moments because memorable things aren’t always spoken aloud.

It made me wonder: in a hypothetical future where everyone has a Bee, do unspoken memories simply not exist?

After wearing Bee for two weeks, I noticed my behavior started to change. On day three, after a workout and latte, I committed bathroom crimes. Unthinking, I cracked a joke about my digestive sin. According to the Bee transcript, I said, “Shit! This thing is listening to me!”

Later that day, I met with my editor. Bee summarized this, noting that my editor “messaged me this afternoon because he saw something funny on a shared platform we both use.” Apparently, one of my “facts” had automatically updated to include my thoughts about a bowel movement. In my to-dos, Bee also suggested I start carrying around Lactaid again.

Having reviewed several Bee-generated summaries in the first two weeks, I’ve concluded that AI should learn to butt out of conversations about death, sex, and bowel movements. Life is hard enough. No one needs to be humbled by AI like this.

Fake news. If this happened, I would actually die of embarrassment.

This is so rude.

I started making a point of muting Bee while commuting or in the office. The last thing I needed was Bee making up more weird things. I also wasn’t keen on violating strangers’ and coworkers’ privacy. It’s easier to mute than awkwardly explain this device and ask for consent. Most of my friends didn’t mind. They’re used to my job-related shenanigans. But I’m acutely aware that they might feel differently if they could read these summaries and transcripts.

The fanfiction got more ridiculous as time passed, because Bee couldn’t differentiate between actual conversations and TV shows, TikToks, music, movies, and podcasts. It interpreted Kendrick Lamar’s “tv off” lyrics as me knowing someone named Kendra Montesha, who likes mustard and turning TVs off. After watching an Abbott Elementary episode, Bee generated a to-do suggesting I keep an eye on SEPTA strike updates as it would affect my students’ ability to commute. Obviously, I’m not a public school teacher in Philadelphia.

Bee AI pin clipped to a person’s collar, on top of a green, ribbed sweater.

When muted, Bee displays a red light — highly confusing to a table of journalists at a company happy hour.

Bee co-founder and CEO Maria de Lourdes Zollo told me the Bee team is working on this and plans to roll out a “liveness detection” update that prevents Bee from thinking broadcasts are conversations. In the meantime, I used headphones or muted Bee during TV shows.

By the end of week two, I was Pavlov’ed. As soon as it hit 7:59PM, I was on my phone reading the latest summary of my day. Forget season eight of Love is Blind. Fact-checking Bee was my new nightly entertainment.

Sometimes the night’s episode was a comedy. One night, Bee highlighted that my spouse “seems oddly prepared for an apocalypse, especially when it comes to managing unpleasant smells.” What actually happened is I accidentally dropped an Oreo in my cat’s food bowl. We debated what I should do. I cited the three-second rule. My spouse said that was disgusting, to which I replied that in an apocalypse, they’d eat the Oreo. They retorted they’d rather disinfect the Oreo with a heat gun.

Screenshot of Bee app conversation summary. It reads “Victoria instructed Mustard to turn off the TV, reminding them both to avoid getting sick again and mentioning leftover charcuterie.”

Kendrick Lamar lyrics are too powerful for Bee.
Screenshot: Bee app

Other nights, the episode was dystopian horror. Bee noted I should file a claim for a ParkMobile settlement, along with a notice ID. I googled the lawsuit — it’s an actual thing. I’ve scoured all four of my inboxes but found no such email. Several times, I’ve sworn I discussed a topic only in texts, only to find it listed as a fact or summarized as part of my day. A few times, I was able to trace these back to a throwaway mention in a transcript that I don’t remember saying. I grew unsettled by how much Bee could glean from an offhand comment.

I no longer spoke as freely as I used to.

This was the week where Bee sent me spiraling.

Fact-checking Bee turned into an interrogation of my memories. Didn’t I say I disliked weisswurst at a happy hour with colleagues? I muted Bee that entire time. How, then, did it generate the fact that I don’t like German sausages? Did I forget another conversation where this came up?

Screencap from the Bee app describing a movie senior reviewer Victoria Song watched. It reads “Movies - You’ve watched ‘Only Lovers Left Alive,’ though you found it only moderately enjoyable.”

I never had this conversation. I actually love Jim Jarmusch films, especially this one.
Screenshot: Bee

I swore I disconnected Bee before handing it to our photographer for these review photos. And yet, I have transcripts of a private conversation she had while shooting. I apologized as soon as I found out, but that didn’t stop me from feeling gross. This wasn’t the first or the last time I had this disconnection issue. I asked Bee about it, and it said that while a conversation already in progress will still show up after a disconnection, the device doesn’t receive new transcripts. I have no reason to believe Bee is lying. But the device’s physical button is fiddly, and it’s annoying that there’s no true off switch. Regardless, I felt like I couldn’t trust myself.

This was also the week where I started engaging with Bee’s chatbot. You can ask things like, “How is my work-life balance this week?” or “Tell me about my relationship with my spouse over the past month.” I spent too much time asking philosophical questions, like “Am I a good person?” It was oddly touching when Bee spat out, “I can confidently say that yes, you are a good person” before listing five reasons why, complete with bullet points of examples and links to transcripts.


It is oddly touching for an AI to gas you up like this, complete with links to transcripts.

More sobering was asking it about my moods over the past month. Bee said I’ve experienced a period of “significant stress balanced with moments of accomplishment and joy.” When asked to summarize the themes of my life, it detailed how I’ve been mediating a tense family dispute. That’s when I remembered this device heard me cry on the phone while fighting with a cousin. Reading Bee’s analysis, my vulnerable moments no longer felt fully mine.

Zollo assured me that Bee takes privacy seriously. Audio is processed in real time in the cloud but not saved. Data is encrypted in transit and at rest. Conversations can be deleted at any time. Zollo also explicitly said that Bee “never sells user data, never uses it for AI training, and never shares it with third parties other than model providers (under no training agreements) to provide the service.” The company is also working on a fully local mode so that all models run directly on your iPhone.

Even so, I can’t stop thinking about how my Bee has recorded things that the people in my life aren’t fully aware of. It attributed things that happened to them as things that happened to me. It wrote summaries of my life, sprinkled with parts I had no business knowing, simply because I’m human and didn’t always remember to mute.

Bee isn’t a unique idea. The Plaud NotePin, Friend, and Omi all promise to do similar tasks. Bee is the most affordable of the lot and, unlike the latter two, actually available. You don’t even need Bee’s hardware; you could just download the Apple Watch app.

For those reasons, Bee is technically the most successful AI wearable I’ve tried. The hardware works, even if there are first-gen quirks like a finicky button, a chintzy strap, and wonky AI transcripts. (I mean, it’s AI.) Battery life is the most contentious wearable feature, and Bee’s lasts me anywhere from three to seven days, depending on how often I mute it. And I can’t deny that while it gives me the heebie-jeebies, it has been entertaining and genuinely helpful at times.

Side angle of Bee wearable surrounded by yellow objects, including a yellow alarm clock, yellow rotary phone, rubber duck, and bicycle. To its lower left, you can see the black pin attachment.

My spouse says they hate Bee. “It’s not useful enough given how much it violates my privacy.”

But having lived with Bee, I’m not sold on AI doubling as your memory. Sure, it was convenient to get summaries of work meetings. That felt appropriate. But it’s the other moments in life — the sensitive and fraught ones — where using Bee felt more like voyeurism.

Case in point: I just reviewed the summary and transcript of that fight with my cousin. Did it help me remember why I was angry? Yes. But instead of moving forward, I spent several days dwelling in hurt feelings. In the end, I had to delete the conversation so I could forgive. Sometimes, being human means knowing when to forget. I don’t trust an AI to do that yet.

Every smart device now requires you to agree to a series of terms and conditions before you can use it — contracts that no one actually reads. It’s impossible for us to read and analyze every single one of these agreements. But we started counting exactly how many times you have to hit “agree” to use devices when we review them, since these are agreements most people don’t read and definitely can’t negotiate.

To use Bee, you must pair it with an iPhone. That includes the phone’s Terms of Service, privacy policy, and any other permissions you grant. Bee also asks for permission to access your contacts, photos, calendar, location, emails, Apple HealthKit, and Reminders. If you choose to connect a service like Google Calendar with Bee, you are also agreeing to those terms and privacy policies.

By setting up Bee, you’re agreeing to:

  • Bee’s terms of service
  • Bee’s privacy policy

Final tally: two mandatory agreements and several optional permissions.
