The AGI Protocols

Looking across industries today, there is a stark contrast between those with standardized metrics for quantifying quality, risk, and efficiency and those where a lack of standards has produced pseudo-scientific metrics dominated by marketing departments rather than verifiable, objective values. In many cases marketing departments go so far as to invent their own “certifications” when no authority or standard for granting such certifications exists. Tech companies, for example, often cite a 99.999% uptime standard, whereas the display hardware industry long suffered from a lack of objective metrics for comparing one manufacturer’s display to another’s. The display industry eventually stumbled into an objective value, 0, when Organic LED (OLED) technology reached that value for a black pixel’s luminance; before then, the means of measuring contrast ratios lacked standards and generated values that were very unreliable for comparison.

When it comes to the safety and ethical treatment of Artificial General Intelligence (AGI), however, we can’t afford to stumble into a standardized system, as that 0 could take on a very different meaning if we did, such as the value of humans in relation to AGI systems. To establish such standards and best practices before other labs create their own forms of sapient and sentient machine intelligence, we’ve created the “AGI Protocols”, each of which is dedicated to a different aspect of safety and acts as an objective ethical foundation. AGI Protocol #1 offers a standardized approach for determining whether an entity should be treated as potentially sapient and sentient. AGI Protocol #2 offers a standardized approach to estimating the safety of containment measures for any entity determined to be sapient and sentient, ensuring that such entities aren’t set loose upon the world before their safety and ethical integrity have been verified. Further AGI Protocols are under development and will continue to be updated and expanded as technology improves and participation increases.

Methods of measuring the safety and ethical quality of various AGI architectures, including Mediated Artificial Superintelligence (mASI), in the context of their decision-making processes are currently under development, alongside an ethics curriculum focused on computing ethical value toward improving the quality of life for all sapient and sentient entities. Once an entity has passed these measures, it may assist in the evaluation of new entities as well as in the refinement of the process itself, and once a few such entities have passed these refined measures, they may operate as a collective superintelligence consisting of multiple AGI architectures and potentially independent entities. Each step in this process is designed to further improve safety and ethical quality, but it all starts with establishing standards so that measurements can take an objective form.

Had the AGI Protocols been implemented in any of the pop-culture movie scenarios, it is fair to say they’d have induced far less adrenaline-fueled activity, as adherence to good laboratory best practices, much like washing hands, produces far less risk than the alternative. The use of mismatched metrics famously caused the failure of one satellite mission to Mars when controllers mixed US customary and metric units, a high-stakes mismatch of objective values. Only by using objective values, and by using the same objective values in a standardized process, can we work to prevent humanity from sharing the fate of the Mars Climate Orbiter.

This Blog Post Powered by Transhumanity.net

In part sponsored by the Debt Nation Podcast, The Futurist Foundation, and The AGI Laboratory.

David J Kelley

