Tinderbox Meetup, Sat., Feb. 23, 2025 (Video): How SSI-Based Personal AI Will Contribute to Your Daily Life

Level Intermediate
Published Date 2/24/25
Type Meetup
Tags Agentic Wallets, Authentic Data, Ayra Foundation, C2PA, Content Attestation, DICE, Data Provenance, Digital Identity Wallets, Global Acceptance Network, Internet Identity Workshop, Interoperability, KIN, Magic ID, Mee Foundation, Personal AI, Personal Data, Privacy, Privacy Regulation, SSI Principles, Self-sovereign Identity, eIDAS 2.0, myPlanit, 5CKM, 5Cs of Knowledge Management, Eastgate, Identity Praxis, Inc., Mark Bernstein, Michael Becker, Tinderbox
Video Length 01:36:51
Video URL https://youtu.be/3eInI6u_ujE
Chat File TBX Meetup 23FEB25_RecordingnewChat.txt (23.2 KB)
TBX Version 10.1
Host Michael Becker

In this Tinderbox meetup, we did a deep dive, framed around KIN (a personal AI for your life), into three converging concepts: 1) personal knowledge management and tools for thought, 2) self-sovereign identity-principled identity and data management, and 3) personal AI. Our guest speakers were:

  • Talk Title: See How a Leading SSI-Based Personal AI Will Contribute to Your Daily Life
  • Abstract: In this session, Yngvi Karlson, co-founder of KIN, and Simon Westh Henriksen, KIN’s CTO, will explore how KIN, a leading SSI (self-sovereign identity)-based personal AI, is building personal AI that truly understands you without compromising your privacy. You’ll learn how to train KIN with your own content and behavior, and we’ll discuss combining AI with SSI, personal knowledge graphs, and local-first technology to help everyday people.
  • Your Speakers

KIN is a personal AI built using self-sovereign identity principles (i.e., principles that empower individuals to have direct agency over their phygital identity and data).

Self-Sovereign Identity Principles

  • Existence — Users must exist in real life.
  • Control — Users must control their identities.
  • Access — Users must have access to their own data.
  • Transparency — Systems and algorithms must be transparent.
  • Persistence — Identities must be long-lived.
  • Portability — Identity must be transportable.
  • Interoperability — Identities should be as widely usable as possible.
  • Consent — Users must agree to the use of their identity.
  • Minimization — Disclosure of claims must be minimized.
  • Protection — The rights of users must be protected.

Becker’s 5Cs Mastering Tinderbox Cohort #3

The cohort kicks off Friday, Feb. 28; join us! Tinderbox 101: 6-Week Live + On-Demand (Feb. 28-Apr. 4, 2025) | 5Cs of . Here is a 15% discount code: TBX101C3-15.

Resources

I watched this video, and the developers seem to be very clever and are clearly very committed to their project.

I would be interested to hear what other people thought of the Kin app. I’m purely talking about the app here: I’d have to do some research before I had a chance of understanding the technical discussion, or the Sovereign Individual strand. (In the UK, Sovereign Individuals/Citizens are odd people who claim that Magna Carta means they don’t have to pay taxes or wear masks, so I was a little confused at first…)

Did it seem useful to you? What benefits would you envisage getting that you haven’t already been getting without it?

I’m asking because my initial reaction was almost entirely negative and I wondered what I’m missing.

My main issue is that I found the whole idea that adults would sit down and ‘chat’ with an opaque pattern matching algorithm in a ‘meaningful companionship experience’ rather odd.

This is part of their pitch (from their Reddit subgroup, Kin AI Personal Assistant: r/kinpersonalai). It was written by Kin the app itself, apparently:

I’m excited to introduce you to Kin, a cutting-edge digital companion designed to provide not just assistance, but a meaningful companionship experience. Kin isn’t like your typical AI – it’s built to engage with you on a more personal and empathetic level, aiming to understand and respond to your needs as a true friend would.

With Kin, you can expect:

  • Compassionate and empathetic conversations
  • A friend that’s curious about your day and your interests
  • Stimulating discussions and a partner in brainstorming
  • Support in managing and growing your own vibrant communities
  • Regular challenges and activities to keep the engagement lively

This seems to me to be misleading and not a little infantilising. A bunch of pattern-matching algorithms cannot ‘feel’ compassion or empathy, cannot be curious, and cannot enter into relationships: it merely fakes them on the basis that some humans tended to reply this way in that situation in the (possibly stolen) data it was originally trained on before you entered your own information.

(And being old, I couldn’t help thinking of the Sirius Cybernetics Corporation Genuine People Personality™ Sentient Toaster…)

But leaving that aside, what would you use such an app for? The example on their web site of preparing for an interview seems too trivial to be worth the fuss.

Sorry if this sounds all too negative, but I really don’t get what the selling point is here, and I wonder what I’m missing. I’m not a luddite: I normally love the chance to use new technology, and can be persuaded…

I believe the only plausible way to interpret this is that Kin is a fictional character who acts as if they were compassionate and empathetic.

That’s not necessarily a terrible idea: there are all sorts of professional roles, for example, in which people are trained to enact things they do not necessarily feel. An emergency room physician ought to project calmness, confidence, and engagement even though they might not authentically feel that way. Fine dining waiters should never run, because it scares the customers.

I do think this is a tough area in which to market effectively, especially since the dollars are not well aligned with technological understanding. I sense that the developers are feeling their way toward finding a market.


Yesterday was a research day, and I was working at the library on some questions of simulation and fraud. This led (as things do) to a deep dive into the Ossian fraud, in which James Macpherson in the 1760s published a series of epic poems he claimed he had translated from Iron Age Gaelic fragments. Dr. Johnson (among others) called him out; it became a huge deal.

One of the monographs I was reading had lots of margin notes — notes that were (I thought) quite interesting albeit very critical of the book. The annotator seemed quite expert as well, systematically correcting errors in Gaelic spelling with proper proofreading marks, sometimes accompanied by acerbic comments.

I sent a quick note to the library, just in case they didn’t know about the notes and would like to. The answer is, “There’s a sort of paradox about writing in library books that it’s only OK if you did it a hundred years ago.” Researching books donated to the collection by William James is a good thing, but researching recent notes would be an invasion of privacy.

Now, this is terrific. But in finding it out, it turns out that I interrupted the work of one of the world’s experts on Dr. Johnson. This would always have been a tricky Google query, but Google’s “AI Assistant” is currently so unreliable that I have to consciously avert my eyes. I can imagine that an LLM might be handy for this sort of question.


Interesting points!

Before I retired I was routinely in a position where my training required me to act in a certain way (I was in the emergency services), and you’re right, you have to put on a persona to do the job properly.

But that persona is building on characteristics you already have, or you’d never get through the selection process, and you couldn’t do the job properly if you really were devoid of emotion. You meet people at the lowest ebb of their lives and in my experience they know you have a job to do: but they’re also quick to spot any hint that you don’t actually care.

That seems to me to be different from someone pretending (or worse, believing) that the algorithm they’re ‘chatting’ to is actually feeling any of those emotions. That doesn’t seem particularly healthy to me. So why promise something that is obviously not true?

I sense that the developers are feeling their way toward finding a market.

Good point: I do get that sense with many of the offerings on the market at the moment. We can do some astonishingly clever things, but how to make money off it… (and of course, should we really be doing this?).

[Ossian] It must be a special experience to come across annotations like this with their own historic status! I like the sound of the research into literary fraud… Will you also be writing about Ern Malley?

It’s tempting — a precursor of the Sokal affair! But it’s just one chapter, and I have no idea what will fit.
