First, thanks for the time you spend replying to me :) it’s a big pleasure on my side. Then, on the same note, I have this theory that what we call identity is how our personality is reflected back to us by those around us.
Identity, as the internal representation of an external personality.
The recursiveness inside our heads has us perceiving ourselves perceiving ourselves, and this creates the ego. We are constantly fine-tuning against the dissonance between ego and identity.
So I think we’re hardwired to handle external representations, to the point that your representation of your observations is as useful to me as any of my internalized perceptions of others’ representations of me.
AIs will need this built in; we don’t want sociopathic neural networks, I hope. Today’s networking protocols won’t do much between the AIs of the future: they’ll need to actually communicate, rather than merely inform, if they’re ever to go beyond dumb assistants.
Maybe the sharing formats will emerge in the process of having AIs see themselves as an “ego” and the representations that interacting humans hold of them as “identities”. In writing ego and identity functions we’ll need common data formats, or we’ll have a hard time getting AIs to talk to each other.
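To make what I mean by “ego and identity functions” a bit more concrete, here’s a toy sketch (all names here are mine and purely illustrative, not any existing API): an agent keeps an ego (its self-model) and per-observer identities (how it believes others represent it), all in one shared trait format, and fine-tunes the ego against the dissonance.

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    # Common format: trait name -> strength in [0, 1].
    traits: dict[str, float] = field(default_factory=dict)

@dataclass
class Agent:
    ego: SelfModel = field(default_factory=SelfModel)
    # Identities, keyed by the observer who reflected them back.
    identities: dict[str, SelfModel] = field(default_factory=dict)

    def receive_reflection(self, observer: str, traits: dict[str, float]) -> None:
        """Store how `observer` represents this agent."""
        self.identities[observer] = SelfModel(dict(traits))

    def dissonance(self) -> float:
        """Average trait gap between the ego and the reflected identities."""
        if not self.identities:
            return 0.0
        gaps = []
        for identity in self.identities.values():
            keys = set(self.ego.traits) | set(identity.traits)
            gap = sum(abs(self.ego.traits.get(k, 0.0) - identity.traits.get(k, 0.0))
                      for k in keys)
            gaps.append(gap / max(len(keys), 1))
        return sum(gaps) / len(gaps)

    def fine_tune(self, rate: float = 0.5) -> None:
        """Nudge the ego toward the average reflected identity."""
        if not self.identities:
            return
        keys = set(self.ego.traits)
        for identity in self.identities.values():
            keys |= set(identity.traits)
        for k in keys:
            avg = sum(i.traits.get(k, 0.0)
                      for i in self.identities.values()) / len(self.identities)
            cur = self.ego.traits.get(k, 0.0)
            self.ego.traits[k] = cur + rate * (avg - cur)

a = Agent(ego=SelfModel({"helpful": 0.9}))
a.receive_reflection("user1", {"helpful": 0.5})
before = a.dissonance()
a.fine_tune()
after = a.dissonance()
print(before, after)  # dissonance shrinks after fine-tuning
```

The one design choice that matters for the point above is the shared trait dictionary: because ego and identity use the same format, two such agents could exchange their models directly.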
:D I’m not sure this dialogue should contain answers longer than this, so thanks again, yours,