He's also begun upgrading warsats with new capabilities of his own invention, and I'm
not entirely sure what those abilities are.
It's exactly the goal we were striving towards, this level of autonomous control, and I know
I should be celebrating.
My work here is done.
But it occurs to me that there's one existential concept I never taught Rasputin: trust.
And even if he trusts us… are we 100 per cent certain we can trust him?
Welcome back, Guardians. Today we are combining all the new lore that the Warmind DLC brings
to piece together the origin story of Rasputin.
Whilst Clovis Bray no doubt played a part in Rasputin's creation, Rasputin's origin
and initial code may lie elsewhere.
Furthermore, Rasputin's original creator may have murdered another scientist to ensure
his original code lived on!
As usual, the artwork seen at the beginning of this video was provided by Gammatrap
and paid for with your kind donations over on Patreon.
All donations go towards paying for the artwork.
This is myelin games and I hope you enjoy this latest Destiny 2 lore episode.
[INTRO] There are three main sources of information
for uncovering the origins of Rasputin: the first is the scannable Clovis Bray
terminals narrated by the Concierge AI, the second is Ana Bray's diary entries obtained
from the Sleeper Nodes, and the third is the journal entries relating to Ares One, yes, the Ares
One mission led by Jacob Hardy to first make contact with the Traveler on Mars.
Starting with the Concierge AI, these scannables focus heavily on Clovis Bray's input into
developing Rasputin's moral protocols and his overall purpose of predicting threats
to humanity and eliminating them.
Have a listen.
Concierge AI: The engineers of Clovis Bray conceived a solution during the development
of our Warmind project.
By relegating ethical decision-making to a Black Box Morality system, the Warmind instruments
its own proprietary virtue quantifiers incomprehensible to even its own creators.
Rasputin determines morality on its own terms, and by design we are blind to that process
in order to preserve its objectivity.
Concierge AI: In the past, one could create a neural network able to identify a feline
or canine when given images of these animals but it would never be able to find a useful
application for that knowledge on its own.
The approach with Rasputin was to create a nested neural network that could not only
detect patterns on a small scale, but recursively find patterns among all of its data.
The end goal for this machine is for it to see things in a way that Humans cannot, and
thus predict and eliminate threats before we know of them.
The Concierge AI is quick to point out that by giving Rasputin a Black Box Morality system,
which I assume means that the internal processes are hidden from the creators, it allows Rasputin
to assess dangers and make decisions that humans cannot.
Whilst the Concierge AI introduces this idea of Rasputin's moral protocol, it is Ana
Bray's diary entries that provide the real details of Rasputin's moral development.
The very first diary entry reads, You can lead a machine to language, but you
can't make it think.
Well, you can't.
But I can.
My name is Ana Bray.
The diary entries show Ana Bray's process for teaching Rasputin: essentially, rather
than specifically coding information into Rasputin, Ana chooses to develop him
through exposure to and absorption of information, like a child, hence why one of her entries
is called "Machine Child".
Transcript 1 of her diary entry reads, Rasputin's designers made a mistake that exasperates
me.
They brought in linguists and neurobiologists, then tried to convert their expertise into
rules for Rasputin to follow.
Way to faceplant, people.
No set of concrete data can ever wrangle a human language.
It's not math.
Language is mutable, adaptable.
For every rule obeyed, there's a rule broken.
Babies don't learn their native tongue through rules.
They learn it through exposure, absorption.
And until I roll up my sleeves and get to work, Rasputin is mankind's most expensive
baby.
So Ana teaches Rasputin by providing access to all this different information so he can
learn like a child; she would go on to give Rasputin "great works of philosophy",
military history and even art, including comedies.
Ana Bray's diary continues and documents improvement in Rasputin's ability to make
decisions, with transcript 3 reading, You heard me.
Rasputin has developed intuition.
I wish he could celebrate with me.
The entries continue, and the final journal entry from Ana Bray that we have access to
ends with, I think he's ready to go online.
I guess I saw this coming, but it was still a blow.
I arrived in the lab this afternoon to discover that Rasputin has discarded my communications
protocols.
He's replaced them with a system-wide program that he himself designed.
His voice is unlike anything that has ever existed.
It is both haunting and lovely.
It is also terrifyingly efficient.
He's also begun upgrading warsats with new capabilities of his own invention, and I'm
not entirely sure what those abilities are.
It's exactly the goal we were striving towards, this level of autonomous control, and I know
I should be celebrating.
My work here is done.
But it occurs to me that there's one existential concept I never taught Rasputin: trust.
And even if he trusts us… are we 100 per cent certain we can trust him?
Between the information in the Concierge terminals and Ana Bray's diaries, we can be pretty
confident that Clovis Bray played a very big role in developing the moral protocols, HOWEVER
that does not mean that they created Rasputin itself. In fact, you may have already noticed
something specific that Ana Bray said in her diary:
Rasputin's designers made a mistake that exasperates me.
This clearly shows that Ana didn't originally make Rasputin but rather modified him and developed
his moral protocols; someone else was involved with the initial creation.
This is where I believe Ares One is involved.
Commander Jacob Hardy led the team that first made contact with the Traveler on Mars; the
team included Dean Qiao, Evie Calumet and Dr. Mihaylova.
Dr. Mihaylova was in charge of developing the AI to help run the mission.
Ares One's original code name was Catamaran.
The lore tab for Mihaylova's Instruments describes her perspective on what an AI should
be; it reads,
"AI can be of help in more than logistics;
it can make people safe."
I feel certain that this Moon X is an intelligence, perhaps an AI, and I don't feel safe with
it at all, do you?
But bear this in mind: for our own AI to serve us well, it will need secrets too.
For AI to serve Humanity, we must feel comfortable, and for us to feel comfortable, we must never
know the truth: that we have a servant who would surpass us if ever it desired.
Of course it won't, because we control it.
But we should not doubt that it is a necessary subterfuge nonetheless.
As you can see, Mihaylova's perspective on AI is already very similar to what Ana Bray would
develop, the Black Box Morality system, allowing Rasputin to keep secrets from its creators.
Mihaylova is recruited to the Ares One team quite suddenly, pulled from her university
work without her knowledge and requested to design the AI for the mission to intercept
the Traveler on Mars. The lore tab for Mihaylova's Choice is a conversation between Mihaylova and
her boss at the university; it reads,
M: I will not sit!
What's happening?
Have I been terminated?
What are you people— P: For heaven's sake.
No.
Your equipment is safe.
It's been moved.
You've been chosen to design the AI for the Catamaran mission.
M: I'm in the middle of my research here.
P: Well, now you're going to continue it there.
And look— you'll be a household name.
M: I don't have any interest in that.
P: Ah!
But they're interested in you.
Hang on.
M: What?
P: I just sent you your itinerary.
You're on a flight, Dr. Mihaylova.
This afternoon.
You're going to meet your computers at Central Command in Florida.
Look at it this way: you'll get some sun.
Whilst reluctant to join the team, Mihaylova does help design the AI for Ares One.
During this process she becomes very defensive of her work and refuses to share
the information with her teammates, specifically Evie.
The lore tab for Mihaylova's Triumph reads, The situation with E becomes increasingly
tenuous.
She insists she needs access to all the AI code for her gravity well measurements, which
I find highly unlikely.
It's simply not necessary and I've given her all the subroutine code that she could possibly
need.
But she wants it all.
It's absurd.
What would she make of the R subsystems if she saw them?
R. That's what I've code-named the deepest core of the experimental AI at the heart of
the new ship.
And he's doing very well, now writing his own code.
Off-the-charts well.
Would E even understand?
Likely she'd go running to Hardy, show him some of the odder items where R has written
some of his own code and seems to be—how can I put it? —passing judgment on us, like
a little hidden critic.
No.
The AI must be protected so that he can function best in the limited way we need.
Not sure how to keep her away, but giving her access could be catastrophic.
Yes, the deepest core of the AI that Mihaylova is developing for the ship that would pilot
Ares One to make contact with the Traveler is code-named R. Despite Mihaylova's reluctance
to share the code, Qiao gives access to Evie anyway.
Three days before Ares One is set to launch for Mars, Evie reviews the AI code and approaches
Mihaylova, saying that she is concerned about the AI system and claiming that it has errors.
The errors that Evie commented on made it seem almost as if the AI had a personality; for example,
Evie noted the AI making assessments of the team and even commenting on Qiao's
snoring.
The lore tab reads, Evie: Listen, I wanted to talk to you alone.
Mihaylova: All right.
E: Have you read some of these outputs?
I think there are some serious errors here.
M: Don't be absurd.
E: You've got… it's got these code caches and it's…
M, it's creating assessments of us.
Of the project, of the crew.
It commented on Qiao's snoring when he was asleep.
Look, here…
M: Did you print that out?
E: Of course.
M: OK.
All right.
So what do you propose?
E: Bringing it to Hardy.
M: Ugh.
Of course.
E: What's that supposed to mean?
M: I mean… look.
Um.
You're right.
It must be an error.
This is all embarrassing.
Let me see if I can fix it.
Give me a day.
E: We don't have a day!
M: Twelve hours, then.
Let me try to locate the problem.
And if I can't, of course we'll take it to the whole team.
E: Are you certain you can?
M: Oh, I have to.
Twelve hours.
By then I swear, we'll have it all squared away.
Mihaylova promises to 'fix' the errors within 12 hours, but you have probably
worked out by now that these are not errors; this is the AI that Mihaylova intended to
make, a system that writes its own code.
Here is where it gets even more interesting. After Mihaylova is confronted by Evie about
the AI and given a deadline to review the errors before they would be
escalated to Hardy…
Evie is killed in a freak accident in their clubhouse, the area where the crew had
been housed prior to launch.
The lore tab for Hardy's Control reads, We're 24 hours late.
I've never seen the crew in such a crappy mood.
It was so… stupid.
An electrical fire in a clubhouse stairwell.
One minute Evie's putting some final touches on her calculations and was headed off to
do a telecast about the effect of flash erosion on coastal tides, and the next…
We didn't even notice she was gone.
We learn about cascading events, how catastrophe comes from one thing stacking onto another.
A fried electrical system.
A weak sprinkler.
Smoke.
No one else paying attention.
A spill in the stairwell, making the steps slippery.
Our safe cocoon became a deathtrap.
A coincidence? Or did Mihaylova go to the ultimate extreme to protect her AI code, killing
Evie so that the AI would remain untouched?
Let me remind you of Mihaylova's last words to Evie,
Let me try to locate the problem.
And Twelve hours.
By then I swear, we'll have it all squared away.
In the mind of Mihaylova, the problem was not the code, the problem was Evie digging
around in her work.
Mihaylova is interviewed years after the mission, and the interview seems to be
conducted by an Old Russia Agency of Technology and Science. When asked about her work for
Ares One and whether the AI she developed helped to run the mission, Mihaylova responded
with,
It was good work.
Most of the AI code I started there didn't really get used for the mission but it came
in handy.
I mean, where do you think—
Before she finishes her sentence, the document ends. If I were to guess at Mihaylova's next
words, they would have been, I mean, where do you think Rasputin was born?
The AI designed to assist Ares One with their mission to contact the Traveler was created
by Mihaylova, and the deepest part of its core was code-named R. I believe this was the
birthplace of Rasputin. We do not know how Clovis Bray got hold of this core, but we
do know that they took this technology and continued Mihaylova's work: her philosophy of
developing an AI with secrets, a black box morality system, an AI that we no longer understand
or control.
That concludes this latest Destiny 2 lore episode.
If you would like to support the channel and cannot think of a comment, leave the phrase,
Black Box, to represent all the aspects that Rasputin hides from us in order to make decisions
on our future.
As usual, it has been a pleasure, this is myelin games.
Peace.