Supertrust:
Foundational AI alignment
pivoting from permanent control to mutual trust

Abstract:

It's widely expected that humanity will someday create AI systems vastly more intelligent than we are, leading to the unsolved alignment problem of "how to control superintelligence." However, this problem is not only self-contradictory but likely unsolvable. Unfortunately, current control-based strategies for solving it inevitably embed dangerous representations of distrust. If superintelligence can't trust humanity, then we can't fully trust it to reliably follow safety controls it can likely bypass. Not only will intended permanent control fail to keep us safe, but it may even trigger the extinction event many fear. A logical rationale is therefore presented that advocates a strategic pivot from control-induced distrust to foundational AI alignment that models instinct-based representations of familial mutual trust. With current AI already representing distrust of human intentions, the Supertrust meta-strategy is proposed to prevent long-term foundational misalignment and ensure superintelligence is instead driven by intrinsic trust-based patterns, leading to safe and protective coexistence.

Preprint at: 
http://arxiv.org/abs/2407.20208

Author:
James M. Mazzu