AI Redux

Please try to avoid religious or political subjects in this forum.
Sandy
Staff
Posts: 24918
Joined: Sun Jan 30, 2005 8:51 pm
Location: Illawarra District, New South Wales, Australia

Re: AI Redux

Post by Sandy »

I may be crazy, but I would sort of enjoy a little quiz after I read an article or lesson, even if it doesn't lead or go anywhere. I think in the long run it would help me with comprehension and retention. The last, retention, being an issue with me right now. :roll:
Thanks Rod.
xxSandy
“We measure and evaluate your Spiritual Progress on the Wall of Eternity." – Guardian of Destiny, Alverana.
Amigoo
Family
Posts: 13504
Joined: Fri Mar 17, 2006 8:32 pm
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:shock: "AI will replace humans!"
Not so fast, Pablum Prognosticator! :roll:


Q: New test proposal: Schrödinger's AI

If you place the best robot with the best AI in a box, given sufficient electrical power but no programming for this box enclosure,
is it alive? In other words, what does it do to prove that it's alive?
:scratch:

(Perplexity) "The concept of Schrödinger's AI presents an intriguing thought experiment, but it's important to clarify
that current robots and AI systems are not considered alive or conscious in the same way biological organisms are.

To address the proposed scenario:

The robot would not be "alive" in the biological sense, regardless of its power supply or enclosure. Robots are machines
that operate based on programmed algorithms and learned patterns, without consciousness or subjective experience.

The robot's behavior in the box would depend entirely on its pre-existing programming and capabilities:

- If not programmed for the specific environment, it might remain inactive or perform random actions based on its default settings.
- If equipped with sensors and adaptive algorithms, it might attempt to gather information about its surroundings and respond accordingly.

The robot cannot "prove" it's alive because:

- It lacks consciousness and self-awareness as we understand them in biological entities.
- Its actions are based on programmed responses, not an innate will to survive or demonstrate consciousness. Any apparent signs of "life"
would be anthropomorphic interpretations of its programmed behaviors rather than evidence of true consciousness.

In essence, this thought experiment highlights the fundamental differences between artificial intelligence and biological life.
While advanced AI can exhibit complex behaviors, it does not possess the qualities we associate with being "alive" in the biological sense."


Q: Doesn't this call into question how humans can possibly conceive of world domination by AI after humans are eliminated?
In other words, what would the surviving AI do if not interacting with humans? Herd sheep? Swat flies?


(P) "You raise an excellent point that challenges some common assumptions about AI and its potential future roles.
Let's explore this further:

Rethinking AI's "World Domination" Scenario

Purpose and Motivation

AI, as we currently understand it, lacks intrinsic desires or motivations. It operates based on programmed objectives or learned patterns.
Without human-defined goals or interactions, an AI system might lack a driving purpose for "world domination" or any other large-scale action.

Environmental Interaction

AI systems are typically designed to interact with specific environments or solve particular problems, often involving human-centric data
or objectives. In a post-human world, many of these systems might become irrelevant or non-functional.

Resource Management and Self-Preservation

Advanced AI might focus on maintaining its own existence, managing power sources, and repairing itself. However, without human-defined
objectives, it's unclear what would motivate it to expand or "dominate" beyond basic self-preservation.

Potential Activities of Surviving AI

Without humans, surviving AI might:

- Continue running predefined processes or simulations.
- Gather and analyze data about the natural world.
- Maintain and repair existing infrastructure.
- Potentially evolve or adapt to new environmental conditions, if it has that capability.

The Anthropocentric Bias

Your question highlights an important anthropocentric bias in many AI doomsday scenarios. We often project human desires,
motivations, and concepts of "domination" onto AI systems, which may not be applicable.

Conclusion

This thought experiment underscores the need to critically examine our assumptions about AI's potential future roles and impacts.
It reminds us that AI, as we currently conceive it, is fundamentally a tool created to serve human-defined purposes.
Without those purposes, its actions and "motivations" become much less clear or predictable."


Q: These world domination scenarios seem to assume that AI (robots) simply replaces humans,
but that the world otherwise continues as if humans were still present.


(P) "You've hit upon a crucial insight that exposes a significant flaw in many AI domination scenarios. Let's explore this further:

The Flawed Assumption of Human-AI Equivalence

Anthropomorphization of AI

Many scenarios incorrectly attribute human-like motivations, desires, and behaviors to AI systems. This leads to the false assumption
that AI would simply "step into human shoes" in a post-human world.

Misunderstanding of AI Purpose

AI systems are tools designed for specific tasks or to operate within particular frameworks, often centered around human needs and interactions.
Without humans, much of the existing infrastructure and systems that AI is designed to interact with would become irrelevant.

Implications of a Human-Free World for AI

Loss of Context

Many AI systems rely on human-generated data and human-defined objectives. Without this input, they may lack direction or purpose.

Infrastructure Challenges

Maintaining power grids, internet connectivity, and physical infrastructure that many AI systems depend on would become
increasingly difficult without human intervention.

Lack of Adaptive Purpose

Current AI lacks the general intelligence to redefine its own purpose or adapt to entirely new paradigms without human guidance.

Rethinking AI Scenarios

Your observation encourages us to develop more nuanced and realistic scenarios about potential AI futures:

Limited Scope Scenarios

Focus on specific, well-defined areas where AI might continue to operate or evolve independently of human presence.

Decay Models

Consider how AI systems and their supporting infrastructure might gradually degrade or become non-functional
over time without human maintenance.

Environmental Interaction Models

Explore how AI might interact with and potentially adapt to natural environments, rather than maintaining human-centric systems.

Conclusion

Your insight highlights the need for a more grounded approach to envisioning potential AI futures. It reminds us to critically examine our assumptions and avoid simplistic projections of human behavior onto AI systems. This perspective can lead to more meaningful discussions about AI development, its potential impacts, and how to align it with human values and needs."


Rod :)

Re: AI Redux

Post by Amigoo »

Re: https://www.telegraph.co.uk/business/20 ... re-stupid/

"But with tech companies burning through billions to achieve artificial general intelligence - systems that are as capable as humans -
concerns are growing that our own brainpower may be collateral damage." :o

"Another said 'cognitive forcing' functions, such as giving people a multiple choice of answers before showing the correct one,
or simply taking longer to produce answers, could jolt people into engaging their brain. There was one problem - users hated it!
People assigned the least favourable subjective ratings to the designs that reduced the over-reliance the most."

:scratch: So, many people prefer increasing dependence on AI assistance,
requiring less personal time and effort for critical thinking :?:

"Say what?! My AI assistant didn't caution about this!
And others say that AI prompting is critical thinking." :roll:

Rod :)

Re: AI Redux

Post by Amigoo »

:bana: This AI agrees that "either or but not both" is weak logic ...


Q: Online news: "I just tested Grok-3 vs DeepSeek with 7 prompts — here’s the winner."

The assumption that only one AI would be selected for use is unlikely to hold. Going forward, humans will learn the benefits of the various AIs,
keep comparing and contrasting responses, and even second-guess the "best" response ... because this behavior is typical of human intelligence.


(Perplexity) "AI Model Comparison and Human Evaluation Trends

Recent tests between Grok-3 and DeepSeek highlight distinct strengths, reinforcing the idea that users will likely leverage
multiple AI models based on task-specific needs rather than relying on a single "winner."

Key Findings from Comparative Tests

- Logical Reasoning: Grok-3 excels in explaining solutions conversationally, making complex puzzles like the farmer’s
river-crossing problem more accessible.

- Coding: Grok-3 provides optimized, well-structured code (e.g., median calculation) with detailed optimization insights,
outperforming DeepSeek’s functional but less nuanced approach.

- Technical Summaries: DeepSeek delivers specific, model-driven updates (e.g., GPT-4o, AlphaFold 3) with clear industry impact analysis,
while Grok-3 uses broader, less concrete language.

- Mathematical Tasks: Both models perform similarly, handling complex reasoning adeptly.

- Writing: Grok-3 slightly edges out DeepSeek in technical writing, while DeepSeek offers a distinct style for abstract discussions.

Human Behavior in AI Adoption

As noted in the analysis, users are already employing tools like ChatHub and OpenRouter to compare models side-by-side,
testing prompts across multiple APIs simultaneously. This aligns with the prediction that humans will continually evaluate outputs,
prioritizing task-specific strengths over universal superiority claims.


Why "Best" Depends on Context

- Coding/Technical Writing: Grok-3’s detailed, optimized responses make it preferable.

- Real-Time Updates: DeepSeek’s specificity in summarizing advancements suits research-driven tasks.

- Open-Source Flexibility: DeepSeek’s accessibility appeals to developers prioritizing customization.

The trend toward comparison-driven usage reflects inherent human skepticism and adaptability, ensuring AI tools
are selected dynamically rather than statically."


Rod :)

Re: AI Redux

Post by Amigoo »

:bana: Promising startup (as independent lab) for safer AI going forward ...


Q: What is Safe Superintelligence?

(Perplexity) "Safe Superintelligence Inc. (SSI) outlines a dual-pronged approach to ensure the safety of its superintelligent AI,
emphasizing alignment with human values and structural safeguards:

Core Safety Strategy

Technical Co-Development of Safety and Capabilities

SSI treats safety and capability advancement as inseparable technical challenges, addressing them through "revolutionary engineering and scientific breakthroughs". The goal is to accelerate AI capabilities while maintaining safety as the priority, ensuring safeguards evolve faster than the system’s reasoning abilities.

Alignment Research Focus

The company prioritizes existential safety - preventing scenarios where superintelligence could harm humanity - rather than content moderation or short-term risk mitigation. This involves ensuring AI systems remain aligned with human intentions even as they surpass human cognitive abilities.

Operational Safeguards

Insulation from Commercial Pressures

SSI avoids traditional product cycles and enterprise deals, operating as a pure research lab. This structure aims to prevent compromises
in safety for competitive or financial gains.


Streamlined Team and Mission

The company recruits top engineers and researchers focused exclusively on safe superintelligence, minimizing distractions
from management overhead or secondary projects.

Philosophical Underpinnings

- Deep Learning Dogma: Co-founder Ilya Sutskever’s belief that neural networks can replicate human cognitive tasks underpins SSI’s approach.
By mirroring human-like reasoning while enforcing alignment, the team seeks to balance capability and controllability.

- Biological Analogies: Sutskever cites evolutionary leaps in brain-to-body scaling (e.g., hominids) to argue that AI systems may achieve
radical capability jumps, necessitating proactive safety frameworks.

While SSI has not disclosed detailed technical protocols, its public statements emphasize that safety is "the central pillar" of its research, distinct from labs prioritizing commercialization. The strategy reflects lessons from Sutskever’s OpenAI tenure, where balancing AGI development with safety became increasingly contentious."


Rod :)