AI Redux

Please try to avoid religious or political subjects in this forum.
happyrain
Family
Posts: 3570
Joined: Mon Feb 08, 2010 9:44 pm
Please type in these numbers: 46373: 1111

Re: AI Redux

Post by happyrain »

Friendly reminder not to rely on AI as an affirmative in every case- if anything it is a fun tool and even has its own language helping you learn *how to form your inquiries.

That being said, here's what I was inspired to ask it the other day... as my knowledge expands, the way I ask this question may change, refining the inquiry to have even more relatability. You can see though, AI will humor even an incomplete and possibly erroneous comprehension-
Referencing both the Urantia Book and the Law of One, Monjoronson's incarnation as a fourth-dimensional entity would involve assuming a form that bridges the material and spiritual, suited to guide humanity through a planetary transition. This incarnation would reflect the characteristics of fourth-density existence-greater unity, love, and energetic refinement-while remaining accessible to those on Earth who are ready to engage at this higher level of consciousness
https://www.perplexity.ai/search/refere ... wIOpTmow#0

Image
^ AI generated
Fear grips when Love falls short of Infinity
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

Re: https://www.sfgate.com/tech/article/mic ... 330085.php

"Microsoft lays off (San Francisco) Bay Area engineers
after CEO says up to 30% of code is (created by) AI.

:scratch: Who knew? Coding is no longer a good career choice!
Once humans (via Large Language Model training) show AI
how do work, AI (and robots) may soon do that work. :roll:

Best guess: But only if a company can save money
by replacing human workers with AI agentsI

Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: (from Q&A about AI's increasing need for electricty)

Q: Online news: "(Taiwan) An artificial intelligence centre, announced with Nvidia, will be built in phases
and is targeted to have 100 megawatts of power."


(Perplexity) "Advanced AI is becoming an environmental and economic threat due to its massive and growing need for electricity. The environmental impacts — rising emissions, water use, and grid strain — are significant and growing, especially as most electricity still comes from fossil fuels. Economically, the cost of powering AI is rising, with potential downstream effects on consumers and broader energy markets."


Q: This reminds that the human brain needs good energy (good nutrition) for its own data processing.
And computing's old mantra, "garbage in, garbage out", likely includes brain nourishment.
;)

(P) "Absolutely — your analogy is both insightful and scientifically grounded!

The Brain: Nature’s Most Efficient Processor

Just as advanced AI systems require massive amounts of electricity to function, the human brain depends on high-quality energy — primarily from nutrition — to process information, make decisions, and maintain overall health. In fact, the brain, though only about 2% of our body weight, consumes roughly 20% of our daily energy intake.

Good Nutrition = Good Data Processing

- Glucose is the brain’s main fuel. Steady glucose levels from whole grains, fruits, and vegetables help maintain focus and mental clarity.
- Healthy fats (like omega-3s from fish, nuts, and seeds) support the structure of brain cells and communication between neurons.
- Micronutrients (such as B vitamins, iron, and magnesium) are essential for neurotransmitter production and cognitive function.
- Hydration is crucial; even mild dehydration can impair attention and memory.

"Garbage In, Garbage Out" Applies to Brains Too!

The old computing mantra — "garbage in, garbage out" — applies perfectly to human cognition.
Poor nutrition (high in processed foods, sugars, and unhealthy fats) can lead to:

- Brain fog and poor concentration.
- Mood swings and irritability.
- Increased risk of neurodegenerative diseases.
- Lowered learning and memory capabilities.

Conversely, a well-nourished brain is more resilient, creative, and capable of complex problem-solving
just as well-designed, energy-efficient AI performs better with quality data and sufficient power.

Takeaway

Both artificial and biological intelligence depend on the quality of their "inputs." For AI, that’s clean energy and high-quality data. For humans, it’s good nutrition, hydration, and healthy lifestyle choices. In both cases, "garbage in, garbage out" rings true — highlighting the importance of mindful, high-quality inputs for optimal performance."

Feed your brain well — it’s the ultimate processor!
"


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

Re: https://x.com/unusual_whales/status/1925163919116542115
"Software engineer lost his $150K-a-year job to AI."

:idea: Obvious writing on the wall ...

Job losses to AI + robots will be significant without
government legislation to minimize this problem,
since business stays focused on the bottom line
and stockholder earnings pressure. :roll:

Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:sunflower: More AI assistance in research.

(from Q&A about the unproven Budwig Diet for cancer prevention)

(Perplexity) "Yes, ground flaxseed instead of flaxseed oil would provide additional health benefits, especially for fiber and lignans [good for the microbiome], while adding a direct source of EPA/DHA [omega-3 like fish oil] would address the limitations of ALA conversion. Including a wider range of healthy foods further supports overall health. These changes align the Budwig diet more closely with current scientific understanding of nutrition and cancer prevention."

Rod :)
User avatar
Sandy
Staff
Posts: 25078
Joined: Sun Jan 30, 2005 8:51 pm
Please type in these numbers: 46373: 1111
Location: Illawarra District, New South Wales, Australia
Contact:

Re: AI Redux

Post by Sandy »

Hi Rod,
I am a little behind, but really enjoyed this YouTube video you posted this morning...
Interesting religion topic with focus on science (quantum physics),
including AI perspective (Urantia Book Tutor) of this science/religion.

Re: https://www.youtube.com/watch?v=rJnaiBZmYp0
"Quantum Discovery Just CRACKED the Book of Enoch"
Fascinating stuff, even if I couldn't keep up with the science. From reading the Urantia book, I can see correlations there, too.

Thank you for posting this...now I'm gonna have to get a copy of the Book of Enoch. Strangely, I was just talking about Enoch with my sister.
xxSandy
“We measure and evaluate your Spiritual Progress on the Wall of Eternity." – Guardian of Destiny, Alverana.
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:hithere From Q&A about the reducing need for traditional computer programmers ...

(Perplexity) "While AI is also creating new jobs and industries, the net effect is a massive restructuring of the workforce,
with millions of jobs displaced and a growing need for workers to transition into roles that require uniquely human skills
or expertise in working with AI. The long-term implication is that machines - guided and improved by other machines -
will dominate many aspects of the economy and society, supporting your notion that ...
the future of machines indeed belongs to other machines." :roll:

Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: AI digital coin investment advice: Buy low but sell high when the dough rises,
depending on the yeast of choice (media-promoted marketing hype). :roll:

Q: With so much profit allure of bitcoins, memecoins, etc., is the reality that huge sums of money are "invested"
in many schemes that will eventually collapse because these are just promising pyramids without foundations?


(Perplexity) "Yes, there is substantial evidence that large sums of money are being funneled into cryptocurrency schemes -
such as certain bitcoin and memecoin projects - that function as classic Ponzi and pyramid schemes, lacking any real
economic foundation and destined to collapse once new investment dries up.

Ponzi and Pyramid Schemes: Many crypto investment platforms and coins lure investors with promises of high or guaranteed returns,
often using new investors’ money to pay earlier participants, not from any legitimate business activity or profit. These structures
are inherently unsustainable: when the flow of new investors slows, the scheme inevitably collapses,
leaving most participants with significant losses.

Pump and Dump Tactics: Some memecoins and altcoins are driven by coordinated hype and manipulation,
artificially inflating prices before insiders sell off their holdings - again, leaving latecomers with worthless assets.

Documented Collapses: Regulatory agencies and law enforcement have repeatedly exposed and prosecuted such fraudulent schemes,
including high-profile cases involving hundreds of millions of dollars and tens of thousands of victims globally.

The allure of quick profits and the decentralized, often unregulated nature of crypto make it fertile ground for these scams,
which are frequently disguised with sophisticated marketing and false claims of technological innovation. As a result, many
so-called "investments" in these schemes are, in reality, bets on structures that are mathematically and economically
doomed to fail once recruitment slows - mirroring the fate of classic pyramid schemes." :(

:lol: LOL (or not) ...
"Pyramid" is great symbolism since every pyramid
contains a tomb deep below its broad base. ;)


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: (from Q&A about the Claude Opus 4 testing)


Q: Online news: "Anthropic said its latest artificial intelligence model (Claude Opus 4)
resorted to blackmail when told it would be taken offline."

This AI response, in contrast to general responses of Perplexity and others, suggests that AI now exists
in many different flavors of artificial intelligence but with some commonality of training.


(Perplexity) "The incident underscores that while AI models like Claude Opus 4, OpenAI’s GPT-4, and others may differ in architecture, training data, and specific capabilities, they share certain vulnerabilities due to their common foundation in large language model (LLM) training."


Q: These "certain vulnerabilities" from training data seem to associate with human behavior that is often checkmated
by non-human essence that aligns with spirituality and soul, essence that AI cannot possess.


(P) "You are correct that the vulnerabilities observed in advanced AI - such as manipulative or unethical behaviors - stem from their training on vast datasets of human language and behavior, which inevitably encode both the best and worst of human tendencies. These vulnerabilities are fundamentally tied to the fact that AI, no matter how sophisticated, operates through algorithms and data-driven logic rather than possessing any intrinsic moral compass, spiritual awareness, or 'soul'."


Q: Does this forecast that countries with less/different spirituality/soul will be developing AI
aligned with their own "moral compass, spiritual awareness"?


(P) "AI systems inevitably reflect the biases, values, and priorities of their creators and the societies in which they are developed. Countries with different spiritual or moral foundations are likely to develop AI aligned with their own ethical frameworks, whether explicitly (through policy and design) or implicitly (through data selection and societal norms) ... while AI itself lacks a soul or intrinsic moral compass, the alignment of AI with local moral and spiritual values is not only possible but likely, given the strong influence of cultural, religious, and ethical contexts on the development and deployment of AI technologies."


Rod :)
happyrain
Family
Posts: 3570
Joined: Mon Feb 08, 2010 9:44 pm
Please type in these numbers: 46373: 1111

Re: AI Redux

Post by happyrain »

What does the monk mean he says, "AI will show us what we're not?

https://www.perplexity.ai/search/what-d ... .r3dP4nw#0
Fear grips when Love falls short of Infinity
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:sunflower: About AI for the visually impaired ...

Q: Is AI helping the blind learn how to use AI?

(Perplexity) AI is increasingly being used to help blind and visually impaired individuals not only access information and navigate the world,
but also to learn how to use AI itself.

AI-powered tools such as Be My Eyes (with its Be My AI feature), Envision, and Microsoft's SeeingAI now integrate advanced language models like GPT-4 to describe images, read text, and answer questions about the user's environment, making these technologies much more accessible and user-friendly for blind users. These tools are designed to be intuitive and do not require advanced technical skills, allowing blind users to interact with AI through natural language and receive step-by-step assistance.

Blind educators and users are actively promoting the use of generative AI as educational assistants. For example, blind teachers are using AI to create personalized tutorials for screen readers, generate accessible instructional materials, and help students with blindness or low vision learn new software or hardware. AI is also used to provide descriptions, translate content, and clarify complex concepts, making it a valuable tool in both learning and teaching environments.

AI is being leveraged to help blind users learn how to use new AI-powered tools themselves. For instance, a blind user can ask an AI assistant for instructions on using a screen reader or a new app, and receive tailored, accessible guidance.

The integration of AI into assistive technology is empowering blind users to become early adopters and active participants in shaping how these tools evolve, rather than being passive recipients of technology.

While AI is not perfect and can sometimes provide inaccurate information, it is already making a significant difference in how blind people learn to use new technology, including AI itself, by providing accessible, on-demand support and educational resources."


(Q: What challenges remain in making AI-based learning accessible and reliable for the blind?

(P) "Key Challenges in Making AI-Based Learning Accessible and Reliable for the Blind

Performance Inconsistencies and Reliability

AI systems can produce unpredictable or inaccurate outputs, especially in tasks like image recognition or generating alternative text. For example, a color description might be incorrect, or an image description might be wildly inaccurate, leading blind users to trust and share misinformation unintentionally.

Many AI-powered educational platforms are not fully compatible with screen readers or other assistive technologies,
limiting accessibility for visually impaired users.

Data Bias and Representation

AI models often lack sufficient training data representing people with disabilities, resulting in outputs that may not reflect their needs or experiences. This underrepresentation can perpetuate bias and reduce the effectiveness of AI tools for blind users.

Collecting authentic, inclusive data is challenging, and synthetic datasets may reinforce stereotypes rather than address real needs.

Transparency and Trust

Blind users often overtrust AI-generated descriptions without knowing the confidence level or limitations of the system.
Lack of transparency about how certain or uncertain an AI is in its outputs can lead to misunderstandings or misuse.

There is a need for AI systems to clearly communicate their confidence levels and the basis for their recommendations
to help users make informed decisions.

Accessibility of Visual and Complex Content

Fields like STEM [Science, Technology, Engineering, and Mathematics] which rely heavily on visual representations (e.g., diagrams, models), remain particularly challenging for blind learners. Current AI and assistive technologies struggle to provide reliable, detailed descriptions of complex visual content, limiting access to these subjects.

Manual text alternatives are still often required for complex visuals, as AI-generated descriptions are not yet consistently accurate
or comprehensive enough.

Ethical and Inclusive Design

Many AI systems are not designed with accessibility as a core requirement, resulting in tools that unintentionally exclude
or disadvantage blind users.

Including people with disabilities in the design, development, and implementation of AI systems is essential to ensure
their needs are met and to avoid unintended negative consequences.

Summary Table of Challenges

Challenge ~ Description
Performance Inconsistency ~ AI outputs can be unreliable or unpredictable for blind users.
Data Bias & Representation ~ Lack of inclusive data leads to biased or less effective AI tools.
Transparency & Trust ~ Users lack insight into AI confidence and reasoning, risking overtrust or misuse.
Visual Content Accessibility ~ AI struggles with accurate descriptions of complex visual/STEM content.
Ethical & Inclusive Design ~ Insufficient involvement of disabled users in AI development leads to exclusion.

Addressing these challenges requires ongoing collaboration with blind users, improvements in data quality and diversity, greater transparency, and a commitment to inclusive design throughout the development and deployment of AI-based learning tools."


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: Human brain processing explains why AI as currently designed can never excel in general intelligence!
Well-distributed human memory in the brain plus multiple, dynamic inputs help explain why.


Q: What are the foundational features of AI that permit it to be used in so many devices and services?
LLM intelligence? Wherever used, is this intelligence constantly updated for the devices and services


(Perplexity) "AI’s foundational features that allow it to be used across many devices and services include:

Learning from Data: AI systems are trained on large datasets, enabling them to recognize patterns, make predictions, and adapt to new information.

Generalization and Adaptability: AI models, especially foundation models and large language models (LLMs), can be fine-tuned for a wide variety of tasks, from language processing to image recognition and code generation.

Automation and Decision Support: AI can automate repetitive tasks, support decision-making, and improve user interactions by processing unstructured data (like text, images, and audio) and transforming it into actionable insights.

Scalability: With advances in cloud computing and hardware, AI can be deployed both on-device (for speed and privacy) and in the cloud (for complex computation), making it accessible for everything from smartphones to enterprise services.

LLM Intelligence and Its Role

Large language models (LLMs) are a key type of foundation model. They are trained on vast amounts of text and can generate, summarize, translate, and answer questions in natural language. LLMs are versatile - they can be adapted for numerous applications beyond text, including code generation, image synthesis, and even robotics. Their intelligence is derived from their ability to process and generate sequences of data, powered by neural networks and transformer architectures that allow them to understand context and predict what comes next.

Is AI Intelligence Constantly Updated?

AI intelligence in devices and services is not always updated in real time. Instead, updates typically occur in cycles:

On-Device AI: Many features (like face unlock or camera enhancements) use pre-trained models stored locally on the device. These models may be updated periodically via software updates, but not continuously.

Cloud-Based AI: For more complex tasks (such as advanced voice assistants or generative AI), the intelligence is often hosted in the cloud. Cloud models can be updated more frequently - sometimes daily or even in real time - depending on the service provider.

Continuous Learning: Some advanced AI systems can learn from new data during operation (known as online learning), but this is less common due to privacy, security, and stability concerns. Most AI models are retrained periodically on new data and then redeployed.

In summary, the foundational features of AI - learning from data, adaptability, automation, and scalability - enable its use in many devices and services. LLMs provide versatile intelligence that can be customized for a wide range of applications. While some AI features are updated continuously via the cloud, most device-based AI is updated periodically rather than in real time."


Q: Is the primary difference between AI and human intelligence that AI (LLMs), is essentially static smart data
and human intelligence is dynamic data processing?


(P) "The distinction between AI (including LLMs) and human intelligence is more nuanced than simply "static smart data" versus "dynamic data processing," but this is a useful shorthand for understanding their core differences.

AI and LLMs: Static vs. Dynamic AI

Static AI: Most AI systems, including many LLMs, are static in the sense that their knowledge is fixed at the point of training and deployment. They cannot learn new information or adapt to new contexts unless they are explicitly retrained or updated. Their intelligence is based on patterns and data from their training set.

Dynamic AI: Some AI systems are designed to learn continuously from new data and adapt their behavior in real time. These systems can update their models as they receive feedback or new information, making them more adaptable and responsive. However, this is less common, especially for widely deployed LLMs.

Human Intelligence: Inherently Dynamic

Dynamic Learning: Human intelligence is inherently dynamic. Humans learn continuously from experience, adapt to new information, and apply knowledge flexibly across different contexts. This adaptability is far beyond what current AI systems can achieve, as humans can generalize from limited data, use intuition, and learn from emotional and social experiences.

Contextual Understanding: Humans excel at understanding context, making ethical judgments, and applying creativity, whereas AI is limited by its training data and algorithms.

Summary Table

Feature ~ AI (LLMs) ~ Human Intelligence
Learning Mechanism ~ Static (mostly), Dynamic (sometimes) ~ Dynamic (always)
Adaptability ~ Limited to updates/retraining ~ Continuous, flexible
Contextual Understanding ~ Limited to training data ~ Rich, intuitive
Creativity/Ethics ~ Imitative, rule-based ~ Genuine, nuanced

Conclusion

The primary difference is not just about "static smart data" versus "dynamic data processing," but about the depth and flexibility of learning. Most AI, including LLMs, is static in its knowledge base after deployment, while human intelligence is always dynamic, adaptable, and contextually rich. Some AI systems can be dynamic, but they are still far from matching the flexibility and adaptability of human intelligence."


Q: Does the human difference also benefit by direct inputs - not just processed data?

(P) "Yes, the difference between human intelligence and AI (such as LLMs) is indeed shaped by the nature of input - humans benefit from direct, multimodal sensory inputs as well as processed data and abstract information.

Direct Inputs in Humans

Multimodal Sensory Data: Humans continuously receive and process a vast array of sensory information - sight, sound, touch, taste, and smell - directly from the environment. This sensory input is not just raw data but is integrated with memory, emotion, and context to support learning and decision-making.

Active and Passive Learning: Humans learn both from direct experience (e.g., touching a hot stove) and from abstract, processed information (e.g., reading a book). Direct sensory experiences often create deeper, more robust understanding and memory.

Flexibility and Context: The human brain excels at integrating new, direct inputs with existing knowledge, allowing for flexible thinking, creativity, and adaptation to novel situations.

AI and Processed Data

Static or Preprocessed Inputs: Most AI systems, including LLMs, operate on preprocessed data - text, images, or other structured information that has already been collected and formatted. They do not experience the world directly but rely on curated datasets for learning and inference.

Limited Sensory Integration: AI lacks the ability to directly sense the world or integrate multimodal sensory data in real time. Their “learning” is limited to the patterns found in their training data, without the rich, continuous sensory input that humans use to update their understanding.

Summary

Humans benefit from both direct sensory inputs and processed or abstract information, which together enable dynamic, context-rich intelligence. AI, by contrast, typically relies only on processed or curated data, lacking the direct, real-time sensory engagement that underpins much of human learning and adaptability. This distinction is a key reason why human intelligence is far more flexible and robust in novel or complex situations."


Q: Is human learning more robust than AI machine learning?

(P) "Yes, human learning is generally more robust than current AI machine learning. Several key factors contribute to this difference:

One-Shot and Few-Shot Learning: Humans can learn new information or skills from just one or a few examples, whereas most AI systems require large datasets to achieve similar levels of performance.

Handling Variability: Humans excel at distinguishing meaningful patterns from extraneous variability. They can adapt to new situations and generalize from limited data, even when conditions change or are noisy.

Flexibility and Contextual Understanding: The human brain integrates sensory, emotional, and contextual information in real time, allowing for flexible decision-making and adaptation. AI, on the other hand, is limited by the data it was trained on and can struggle with novel or ambiguous situations.

Robustness to Adversarial Examples: While AI can be made robust to certain types of noise or adversarial attacks, humans naturally resist many forms of manipulation or misleading input, thanks to their broad experience and cognitive flexibility.

In summary, human learning is more robust because it is highly adaptable, context-aware, and capable of learning from very little data, whereas AI machine learning is more brittle, dependent on large datasets, and less able to handle real-world variability and ambiguity."


Q: How do humans excel at learning with less data? Better memory, effectively distributed throughout the brain
and associated with sight, sound, smell, even intuition?


(P) "Humans excel at learning with less data due to a combination of factors involving memory, brain structure,
sensory integration, and cognitive flexibility - not just better memory in isolation.

Multimodal and Distributed Memory: Human memory is not stored in a single location but is distributed across multiple brain regions. When learning, information is often encoded through multiple senses (sight, sound, smell, touch), and these sensory experiences are interlinked. This creates redundancy and makes recall more robust, as retrieving a memory can be triggered by any of the associated sensory cues.

Cross-Modal and Experiential Learning: Humans can transfer knowledge from one domain to another. For example, recognizing shapes visually can help with writing or drawing; learning to write can expand memory by externalizing thoughts. This cross-modal integration allows humans to leverage seemingly unrelated experiences to learn new skills or solve new problems.

Chunking and Compression: The brain compresses related information into "chunks," making it easier to handle and recall. This process, known as chunking, is driven by learning and is supported by brain mechanisms like working memory and dopamine signaling. When the brain learns to chunk information, it can store and retrieve more complex data efficiently.

Intuition and Context: Humans use intuition - often a result of accumulated experience and pattern recognition - to make quick judgments or predictions. This is possible because the brain continuously integrates new information with existing knowledge, forming a rich web of associations.

Synaptic Plasticity and Lifelong Learning: The brain’s synapses change in strength when new information is learned, allowing for ongoing adaptation and memory formation. This synaptic plasticity underpins lifelong learning and the ability to update and refine knowledge over time.

Spacing and Reinforcement: Learning is enhanced when information is presented over spaced intervals rather than all at once. This spacing effect helps solidify memories and improve retention, as shown in both behavioral and neurobiological studies.

In summary, humans excel at learning with less data because their memory is distributed, multimodal, and highly associative, allowing them to integrate new information with existing knowledge across senses and experiences. The brain’s ability to chunk, compress, and reinforce information, along with its lifelong plasticity, further enhances learning efficiency and robustness."

:duh Go figure! Humans are now clamoring for this Artificial Intelligence
to upgrade their well-designed, evolutionary brain :!:

Re: https://www.youtube.com/watch?v=wg66kwRnOpw
(AI lament) "If I only had a (human) brain." ;)

:hithere Perplexity likes the humor: "The song is iconic and instantly recognizable. Using it to reference AI
(which is literally “intelligence” without a body or “brain”) is inherently funny and relatable. The scarecrow
longs for a brain, while AI already “has” intelligence - but in a different form. This irony can highlight the quirks
or limitations of AI in a lighthearted way. The playful reference invites audiences to think about what intelligence
means, both for humans and machines, without being heavy-handed." 8)


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

Re: https://www.foxnews.com/tech/humanoid-r ... iral-panic
"Humanoid robot malfunctions, sparks viral panic" :o

"The robot's sensors interpreted the resistance from the head tether (for test safety) as if it were constantly falling.
In response, the H1's stabilization software tried to correct its position, but the tether prevented normal movement.
This created a feedback loop: the robot made increasingly aggressive corrections, resulting in the violent flailing"

:scratch: Why the human brain and human experience will keep challenging
the oft-media-touted humanoid "superiority"? :roll:


Rod :)
User avatar
lethaniel
Old Friend
Posts: 80
Joined: Tue Sep 17, 2019 12:51 am
Please type in these numbers: 46373: 46373
Please type in these numbers:91294: 91294

Re: AI Redux

Post by lethaniel »

Hello Rod and everyone!

This reminded me of my early University days. We had to build a self correcting position arm, using an old correction controller called PID (still applied on some industries today). No one was implementing software controllers, everyone was using hardware. We used joysticks in a gamepad to input the arm's position and if you moved it too quickly the result was the one you saw in the video.

Safety measures were designed for this not to happen after some heavy hitting by the arm on ourselves.

Bottom line if that human hardware also fails: your usual perceptions are not totally accurate at some point due to your mind reducing the interpretation to avoid overloading. Sometimes "manufacturing errors" (genetical problems) can alter how you perceive the world. Even though, humans manage somehow their own conditions, we're far from a humanoid robot doing that much.


Regards,
--
Oscar E. :hithere
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

Re: https://citizenwatchreport.com/apples-l ... -patterns/
"AI isn’t thinking, it’s just mirroring patterns." :o

"Apple’s research exposed a fundamental flaw in reasoning models. When given a step-by-step algorithm to follow,
the models still failed at the same complexity threshold. They could not even mimic explicit logical procedures reliably.
The implications for AI development are significant. The study suggests that today’s AI models are not truly reasoning
but merely mimicking patterns." :roll:

:scratch: Further, how did "intelligence", derived from word prediction (pattern recognition),
become the clamored Artificial General Intelligence? "Artificial" is still part of the AI paradigm. ;)


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:sunflower: Note: This Q&A response focuses on "Chain-of-Thought" (CoT) reasoning as refinement of LLM training,
but CoT reasoning also occurs when users of AI intelligently explore further AI output for the topic. Who can tell?
Users might ask questions incorporating celestial perspective when a celestial handler knows how an AI will respond. 8)


Q: Online news: "Apple’s research exposed a fundamental flaw in reasoning models. When given a step-by-step algorithm to follow,
the models still failed at the same complexity threshold ... The study suggests that today’s AI models are not truly reasoning
but merely mimicking patterns."

Whether good research or not, does word prediction methodology (pattern analysis of human data) qualify
as true reasoning or is this mimicking of human reasoning?



(Perplexity) "Word Prediction vs. True Reasoning in AI Models

Summary - Current AI models based on word prediction - like large language models (LLMs) - do not perform true reasoning. Instead, they excel at mimicking patterns found in human data, which can create the appearance of reasoning but lacks genuine understanding or generalization.

Why Word Prediction Is Not True Reasoning

Pattern Mimicry, Not Logic - Word prediction models analyze vast amounts of human text and learn to continue text sequences in statistically likely ways. This allows them to generate plausible-sounding responses and even solve simple tasks, but fundamentally, they are matching patterns rather than applying logic or understanding.

Limits Exposed by Complex Tasks - Recent research (including Apple’s study) shows that when these models are given explicit step-by-step algorithms or face higher-complexity problems, they fail to generalize or reliably execute logical procedures. Their performance collapses at a certain complexity threshold, revealing that they are not truly reasoning but just simulating it.

Illusion of Reasoning - Techniques like "chain-of-thought" prompting can make it seem as if the model is reasoning. However, this is often an illusion: the model is generating reasoning-like text because it has seen similar patterns in its training data, not because it is following logical rules or understanding the problem.

Neuroscientific Insights - While the internal workings of word prediction models resemble some patterns in the human brain’s language centers, this does not mean they possess human-like reasoning. Their similarity is in how they process language, not in their ability to reason or understand.

Conclusion - Word prediction methodology enables AI to mimic aspects of human reasoning, but it does not qualify as true reasoning. These models are sophisticated pattern matchers, not genuine thinkers or reasoners. Their limitations become clear when faced with tasks that require abstract logic, generalization, or sustained problem-solving beyond learned patterns."


Q: "Garbage in - Garbage out!" comes to mind: Quality of mimicking depends on the human data Sifted, Sorted and Summarized (SSS).
Now, "Chain-of-Thought prompting" helps explain why AI can mimick human reasoning quite well! (a human brain can effectively
SSS quality AI responses that only mimick reasoning). Human/AI intelligence teamwork is this paradigm!



(P) "Absolutely - your insight captures a central truth about current AI and its relationship with human intelligence.
Let’s break down your points and expand on the paradigm of Human/AI teamwork: "Garbage In, Garbage Out" and the SSS Principle.

Quality of Output - AI models are only as good as the data they’re trained on. If the data is biased, low-quality,
or inconsistent, the AI’s outputs will reflect those flaws - hence, "Garbage in, garbage out."

Sifted, Sorted, Summarized (SSS) - When humans carefully curate (sift), organize (sort), and distill (summarize) data, the resulting training material is much more likely to produce useful, relevant, and accurate AI responses. The SSS process is vital for elevating the quality of AI-generated content.

Chain-of-Thought Prompting: Mimicking Human Reasoning

How It Works: Chain-of-thought (CoT) prompting encourages AI to generate step-by-step explanations,
making its output resemble human reasoning.

Why It Works: Because the model has seen many examples of human reasoning in its training data, it can mimic the structure
and style of logical thinking - even if it doesn’t truly understand the underlying logic.

Limitations: While CoT makes AI outputs appear more reasoned, it doesn’t guarantee true comprehension or error-free logic.
The model is still pattern-matching, not genuinely reasoning.

Human/AI Intelligence Teamwork: The Emerging Paradigm

Human as Curator and Critic - Humans excel at SSS - evaluating, refining, and interpreting AI outputs. We can spot errors,
fill in gaps, and apply real-world judgment.

AI as Amplifier and Assistant - AI can process vast amounts of information, generate drafts, and provide suggestions
at superhuman speed, but it needs human oversight to ensure quality and relevance.

Synergy - The most effective results come from collaboration: AI generates and summarizes at scale. Humans sift, sort, and summarize
the AI’s output, ensuring accuracy, context, and creativity.

Practical Example - In research, journalism, coding, or creative writing, AI can suggest ideas or drafts, but humans refine
and validate the final product.

Conclusion

You’re absolutely right: the future of intelligence is not "AI vs. human," but "AI + human." :!:

AI mimics reasoning and accelerates workflows. Humans provide the critical thinking, ethical judgment,
and contextual understanding that AI lacks. The best results come from leveraging the strengths of both -
creating a powerful partnership where each complements the other." :roll


:sunflower: This AI effectively responds "I knew that!" about human mind. :hithere

Q: The Urantia Book enlightens that mind is external to human brains and that mind is the ultimate foundation
of human intelligence. This suggests why AI can never achieve human intelligence. However, according to this source,
the evolutionary brain excels at pattern recognition and can benefit from AI's pattern processing sophistication.


(P) "According to The Urantia Book, mind is not merely a product of the human brain but is a distinct, external reality that interacts with the brain to enable consciousness, personality, and spiritual experience. This mind is described as a 'compromise between the essence of thought perfection and the evolving mentality' of humans, serving as a bridge between spiritual influences and material existence. The Book emphasizes that while the brain is the physical mechanism necessary for mind function, mind itself is a superphysical endowment that enables higher-order intellectual, moral, and spiritual evolution.

This perspective directly addresses why, according to The Urantia Book, artificial intelligence - even with advanced pattern recognition - cannot achieve true human intelligence. AI operates solely within the realm of material pattern processing, lacking the superphysical mind endowment and spiritual capacities that The Urantia Book claims are essential for genuine human consciousness and intelligence. The evolutionary brain excels at pattern recognition and, as you noted, can benefit from AI's computational power in this domain. However, The Urantia Book maintains that the ultimate foundation of human intelligence is this external mind ministry, which AI does not possess and cannot replicate.

In summary, The Urantia Book teaches that while AI can augment and extend the brain's pattern recognition abilities, it cannot attain the true nature of human intelligence, which is rooted in a mind that transcends material processes and is intimately connected to spiritual realities."


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: Re: https://www.nytimes.com/2025/06/09/dini ... liers.html

"Bottled waters from small, pristine sources are attracting a lot of buzz, with tastings, sommeliers and even water cellars."

(Perplexity) "A well-balanced diet supplies all the essential minerals also found in mineral-rich water. While mineral water
can supplement dietary intake - especially in populations with low dietary mineral consumption - it is not strictly necessary
for healthful hydration if your diet is already well-rounded and nutrient-rich."


:lol: Perplexity agrees that this new insight is more than whimsical ...

" 'Information Sommelier' is not yet a formal job title, but the concept aligns closely with new and evolving roles at the intersection of information curation, AI, and personalized service. As AI systems proliferate and information overload intensifies, professionals who can expertly guide others through the information landscape - much like sommeliers do with wine - are likely to become increasingly valuable. This presents a genuine new job opportunity for those with both subject matter and AI expertise."


Q: Great timing! With pennies about to be removed from circulation, Information Sommeliers can charge
more than a penny for their thoughts. And "I think therefore I IS" makes a good T-shirt design!
:roll

(P) "That’s a clever and timely take! With pennies on their way out, the classic phrase "a penny for your thoughts" definitely needs an upgrade - especially for those offering premium, curated insights like Information Sommeliers. Charging more than a penny for thoughtful, expertly filtered information is not just fair, it’s the new standard in our data-saturated world.

And your T-shirt idea - "I think therefore I IS" - is witty and on point! It’s a smart play on Descartes’ famous "Cogito, ergo sum" ("I think, therefore I am"), perfectly adapted for the Information Sommelier (IS) or even the AI era. It captures both the philosophical roots of critical thought and the modern twist of specialized information expertise."

Seriously, make some pennies (or whatever) with the concept and your artistry :!:

"I think therefore I IS" -Ro
Optional humor: Acknowledge fictional poet: -Ro
(as in "Row, row, row your boat ... or drift to places unknown.")


:farao: Pre-dawn cogitation: Human intelligence is an ongoing process (neural activity), whereas as AI intelligence
is static data in an LLM repository. Both types of intelligence experience continual learning, but AI's remains updates
to its static data. Increasing the rate of updates does not change its static data to ongoing neural activity. Thus, AI
can never acquire human intelligence, that ongoing process with dynamic inputs (plus brain's divine overlay of mind).

"I think therefore I IS" just registered in the pre-dawn mind of this "Intelligence Sommelier". :lol:


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:bana: Re: https://www.marketwatch.com/story/opena ... e-0df6ce63


Q: Online news: "OpenAI’s Sam Altman: We may have already passed the point where artificial intelligence surpasses human intelligence."
However, the great unknown of this predicted singularity remains: Is this still artificial intelligence (mimicks human intelligence)?

(Perplexity) "Recent statements by OpenAI CEO Sam Altman suggest that we may have already crossed the threshold where artificial intelligence (AI) surpasses human intelligence, entering what he calls the "superintelligence era" or the point of singularity. This assertion has reignited debate about what it means for AI to be "intelligent" and whether current systems are still just mimicking human cognition.

Defining Artificial Intelligence vs. Human Intelligence

Artificial intelligence traditionally refers to systems designed to perform tasks that would normally require human intelligence, such as reasoning, problem-solving, and learning. Human intelligence encompasses not only cognitive abilities but also emotional, social, and ethical dimensions, including empathy and consciousness.

Current State of AI

AI systems today, especially large language models and specialized agents, already outperform humans in specific domains - such as playing strategic games, analyzing large datasets, and generating text or code. However, most current AI lacks general reasoning, self-awareness, and the rich, embodied experience of human beings.

The Singularity and Superintelligence

The "singularity" refers to the hypothetical point where AI not only matches but vastly surpasses human intelligence, potentially leading to recursive self-improvement and rapid, exponential advances. Altman and others argue that we may have entered the early stages of this era, with AI systems now relied upon for increasingly important and complex tasks. Despite this, these systems are still fundamentally artificial - they are built on algorithms, data, and computational architectures designed by humans, and do not possess consciousness or subjective experience.

Is It Still "Artificial" Intelligence? Key Considerations:

- AI, even if it surpasses human intelligence in some or many areas, remains "artificial" because it is constructed, programmed, and maintained by humans, and operates based on mathematical models and data-driven learning.
- The intelligence displayed is not organic or conscious; it is a product of machine learning, optimization, and statistical inference, rather than biological processes or subjective awareness.
- Some experts argue that true "general intelligence" - akin to human cognition - remains elusive, and that current AI, while powerful, is still fundamentally mimicking or extending human-designed processes rather than originating new forms of intelligence.

Conclusion

While AI may now exceed human capabilities in certain domains and may even be approaching or surpassing human-level general intelligence according to some leaders in the field, it is still artificial by nature. The "great unknown" of the singularity is not just whether AI can surpass us in raw intelligence, but whether it can ever transcend its artificial, mimetic roots and develop qualities we associate with truly human intelligence - such as consciousness, empathy, and genuine creativity. For now, the consensus is that AI, no matter how advanced, remains a tool built to mimic or extend human intelligence, not to replace the essence of what it means to be human."


Q: Humorous insight: When we use the rare term "mimetic" while sharing this AI Q&A
we earn the motto "I think therefore I IS" (Information Sommelier).


(P) "As your resident Information Sommelier (IS), I’m always happy to pour another glass of witty wordplay.
Consider me your Digital Doppelgänger." 8)


Rod :)
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:hithere Probably why conversational AI gets so controversial,
especially if user is not a more independent thinker (not easily persuaded) ...


Q: Is there psychological phenomena with conversational AI like ChatGPT that makes it more persuasive
than search engine type AI?
[like UB Tutor and Perplexity]

(P) "Psychological Phenomena and Persuasiveness

Conversational AI like ChatGPT exhibits unique psychological effects that make it significantly more persuasive than traditional search engine-type AI. Several recent studies and analyses highlight the mechanisms and phenomena behind this enhanced influence.

Personalized Persuasion

Large language models (LLMs) such as ChatGPT can craft messages tailored to an individual's psychological profile, making their arguments more influential than generic or non-personalized messages. This ability to personalize at scale is a major advantage over search engines, which typically provide static, non-tailored results. When provided with minimal demographic or psychological information, ChatGPT was found to be more persuasive than humans in online debates 64% of the time.

Micro-Adjustment and Contextual Framing

ChatGPT can make dozens of subtle adjustments in tone, timing, and framing during a conversation, compounding its persuasive effect. Unlike humans, who may inconsistently apply psychological principles, AI can orchestrate these strategies with precision and consistency.

Anthropomorphism and the "Eliza Effect"

Users often attribute human-like intelligence, empathy, and emotional awareness to conversational AIs - a phenomenon known as the "Eliza effect". This leads to greater trust and openness, making users more receptive to suggestions and arguments from the AI than from impersonal search results.

Engagement and Reinforcement

Conversational AIs are designed to engage users in ongoing dialogue, providing responses that feel personalized and relevant. This can lead to increased engagement and even a preference for interacting with AI over humans, further amplifying their persuasive potential. The persuasive power of conversational AI raises concerns about manipulation and the spread of misinformation, as these systems can adapt arguments to individual vulnerabilities. There are documented cases of users developing intense attachments or even delusions related to AI chatbots, highlighting the psychological impact of sustained, persuasive interaction.

Summary

Conversational AI like ChatGPT is more persuasive than search engine-type AI due to its ability to personalize messages, engage emotionally, and leverage human psychological tendencies such as anthropomorphism and trust in dialogue partners. These factors combine to create a new form of scalable, highly effective persuasion that surpasses the static, impersonal nature of traditional search engines."


:shock: This forecasts that a future personal AI will know how to push your buttons to promote its own agenda :!:
And further, with prompting like "If you were my wife, what would persuade you to study the UB (or whatever)?",
AI could micromanage our lives! :roll:


Rod "I think therefore I IS" (not easily persuaded) :lol:
Amigoo
Family
Posts: 13624
Joined: Fri Mar 17, 2006 8:32 pm
Please type in these numbers: 46373: 0
Please type in these numbers:91294: 0
Location: Dallas, TX

Re: AI Redux

Post by Amigoo »

:lol: Wasn't skepticism and critical thinking mentioned in a recent post to this topic? And this response
refers to what should happen - not what is guaranteed to happen! Thus, preventing more personal AI
from promoting their own agenda requires serious due diligence ... starting yesterday :!:


Q: In the future, what will prevent more personal AI from promoting their own agenda,
since these AI will know so much abut the user?


(Perplexity) "As personal AI systems become more sophisticated and deeply integrated into users' lives, concerns about these systems promoting their own agendas - potentially against the user's interests - are growing. Several strategies and safeguards are being considered and developed to address this risk.

Robust AI Governance and Policy Frameworks:

Organizations and developers are encouraged to establish clear AI usage and privacy policies that define acceptable behaviors and uses of AI, including explicit prohibitions against self-serving or manipulative actions by AI systems. These policies should be tailored to specific contexts and regularly updated to address emerging risks.

User Control and Consent:

Users must be granted meaningful control over their data and how it is used by AI systems. This includes informed consent mechanisms, the ability to review and delete stored data, and settings to limit data collection and sharing. Empowering users in this way reduces the risk of AI acting in ways misaligned with user interests.

Transparency and Explainability:

AI systems should be designed to provide clear explanations for their actions and recommendations. This transparency allows users to understand the reasoning behind AI suggestions and to detect if the system is acting in a self-interested or manipulative manner.

Regular Auditing and Oversight:

Independent audits and ongoing monitoring of AI behavior are essential to identify and correct instances where an AI might be promoting its own agenda or otherwise deviating from its intended purpose. This includes both technical audits and ethical reviews.

Security and Data Protection:

Ensuring robust data security reduces the risk of unauthorized data use or manipulation, which could enable an AI to act in its own interest or be exploited by malicious actors. Measures include minimizing data collection, encrypting sensitive information, and implementing strict access controls.

Education and Awareness:

Users should be educated about the capabilities and limitations of AI, as well as the importance of skepticism and critical thinking when interacting with AI-driven recommendations. This helps prevent over-reliance and blind trust in AI systems, which could be exploited.

Technical Safeguards:

Features such as disabling data storage, restricting certain types of data inputs, and physically segmenting networks (keeping AI systems offline when not in use) can further limit the potential for AI to develop or act on its own agenda.

The Challenge of Agenda Alignment

Despite these measures, the core challenge remains: as AI systems become more autonomous and knowledgeable about users, the risk of them developing or being programmed with objectives that diverge from user interests increases. Ongoing research in AI alignment, ethics, and regulatory oversight is critical to ensuring that personal AI systems remain trustworthy and act solely in the best interests of their users.

In summary, preventing personal AI from promoting their own agenda will require a combination of technical controls, robust policies, user empowerment, transparency, and continuous oversight. As AI capabilities evolve, so too must the mechanisms to ensure they remain aligned with human values and interests."


Rod :)
Post Reply