From Data to Wisdom: Hybrid Minds Advancing Sustainability 

 

Gleb Vzorin 

Lomonosov Moscow State University, Moscow, Russia 

 

Abstract 

Exploring the intersection of artificial intelligence (AI) and ecology, this work delineates AI's evolution through pre-human, human, and post-human levels, assessing its potential to enhance ecological sustainability while scrutinizing the implications for human agency and autonomy. It underscores the transformative impact of AI across these levels, advocating for interdisciplinary research, ethical frameworks, and governance to navigate the complexities of integrating AI within ecological and societal paradigms. 

 

Introduction 

The current ecological landscape presents a series of complex and interrelated challenges that are crucial to address for the sustainability of our planet. Central among these issues are the consequences of industrial development and large-scale agriculture, which have been identified as significant contributors to environmental degradation, including global warming, air pollution, and soil pollution (Nowak, 2020). The implications of global warming and climate change necessitate not only scientific attention but also robust international cooperation and governance in environmental policies (Guimarães, 2004). Moreover, there is an urgent need to enhance ecological education, particularly in the domains of agriculture and forestry, to address existing deficits in talent development, teaching methodologies, and resource management (Hong-xiao, 2011). Addressing these ecological issues requires a multifaceted approach, incorporating sustainable development, international cooperation, improved education, and shifts in societal worldviews.

While global efforts are pivotal in addressing the pressing ecological challenges we face, it is equally important to recognize the significant role that individual actions play in complementing these efforts. Individual contributions can range from adopting sustainable practices in daily life to influencing collective action and community initiatives. Kautt (2019) emphasizes the importance of individual action for the ecological transformation of society, underscoring the necessity of self-management in fostering pro-environmental practices. This indicates that individual behavioral changes are indispensable for societal transformation toward sustainability (Kautt, 2019). Personal decisions and lifestyles play a pivotal role in shaping sustainable development, since individuals' attitudes and actions critically influence how environmental problems are addressed (Prisac, 2017). Individual-level behavior-change campaigns, along with the influence of transformational individuals, can inspire and lead collective ecological action. Such leadership and participation are essential in modifying environmentally damaging systems and promoting sustainability (Amel et al., 2017).

As we can see, the world is facing a myriad of significant ecological challenges that require immediate attention. These issues, ranging from climate change to biodiversity loss, demand action at both individual and collective scales. However, the question arises: is it possible to develop a modern, unified approach that encompasses both individual responsibility and collective action? This approach would need to harmonize diverse perspectives and strategies, ensuring that efforts at all levels are synergistic and effective in addressing the global environmental crisis. 

One of the most promising approaches to tackling ecological challenges on both these levels is the rapidly evolving field of artificial intelligence (AI) systems. As Joachim Diederich noted in his recent book (2021), advanced forms of AI are going to impact everyone and everything due to their soliciting nature. He referred to this as the “universal solicitation” hypothesis, which suggests that the eventual emergence of superior artificial general intelligence (AGI) will affect literally every aspect of our lives and may even transform our minds.

Sustainability represents a crucial domain where AI is poised to exert significant influence, a trend that is not only currently observable but is expected to intensify in the future (Zhu et al., 2023; Haghighi et al., 2023). At present, AI systems dedicated to addressing ecological concerns predominantly operate at a macroscopic scale, orchestrating sustainable practices within industrial and governmental realms. This broad focus, while impactful, often leaves the direct implications on individual lives less visible. Nevertheless, as posited by Diederich's hypothesis, advanced AI technologies are anticipated to permeate every aspect of human existence in the near future. This widespread integration underscores the necessity for an in-depth investigation into how these technologies will affect individuals. Such an exploration must be comprehensive, delving into the psychological and societal consequences that might arise as AI becomes more intertwined with daily life. This approach is essential to fully understand and prepare for the multifaceted impact of AI on both a personal and collective level in the realm of sustainability.

To initiate such an in-depth investigation, we first need to take a step towards a more general theory and establish a nuanced classification of the impact categories of AI. This classification should be grounded in two primary dimensions: 1) the degree of AI's capabilities, and 2) the extent of the interrelationship between AI and humans. The first key idea of this paper is that there are three such levels: pre-human, human, and post-human. In the next sections, we shall describe this concept in more detail.

 

AI and Responsibility 

While AI presents itself as a formidable tool with immense potential for addressing ecological challenges, it also carries significant responsibilities. Historically, human expansion has been a principal driver of ecological degradation. In a stark and somewhat dystopian vision, reminiscent of themes explored in anti-utopian literature, the most direct path to circumventing such ecological crises might seem to be the eradication of humanity itself. Against this backdrop, the imperative for AI to operate within a framework of ethical values becomes not just apparent but essential. 

However, beyond the tangible threats to our environment and existence lies a more insidious risk: the erosion of human agency and autonomy. The advent of AI, particularly at advanced levels of integration, presents a subtle yet profound challenge to our capacity to make independent decisions and maintain control over our lives and societies.  

The primary aim of this paper is to explore the potential applications of AI assistants across the pre-human, human, and post-human levels in addressing ecological challenges, while simultaneously safeguarding human autonomy. We will delve into each category, examining its dimensions and associated risks, and illustrating these points with both real-world and hypothetical examples. 

 

Dimensions of Human-AI Collaboration 

To clarify our concept of three levels in human-AI collaboration, it is essential to elaborate on two defining dimensions. The first dimension, the degree of AI's capabilities, refers to the extent of AI's abilities in comparison to human intelligence. Weak AI, or Narrow AI, is designed for specific tasks and operates within a limited context without true consciousness or sentience. Human-like AI, or Artificial General Intelligence (AGI), demonstrates cognitive abilities akin to humans, capable of learning and applying intelligence broadly and adaptably. Superior AI, or Artificial Superintelligence (ASI), surpasses human intelligence in all domains, including creativity and problem-solving.  

The second dimension is more complicated. The primary question here is, 'Who is the author of an action?' The common view appears to be that the human personality, with its needs, generates specific goals. To achieve these goals, certain operations must be completed, which, in turn, satisfy the needs of the human personality. But what is the place of AI tools in this vertical hierarchy? This aspect of AI-human interaction can be observed at three levels: operational, collaborative, and integrative.

At the operational level, AI acts as an external tool, executing tasks set by humans. This is a direct and straightforward interaction, where humans set goals and AI performs the operations to achieve these goals. For instance, a GPS navigation system in a vehicle calculates routes based on the driver's input, aiming to optimize travel time and avoid congestion. 
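
To make this division of labor concrete, the minimal Python sketch below (hypothetical, and not tied to any particular navigation product) treats the road network as a weighted graph: the human supplies the goal, here an origin and a destination, while the system merely executes the optimization, in this case a standard shortest-path search.

```python
# Hypothetical sketch: the human sets the goal (origin, destination);
# the operational-level tool only computes how to reach it.
import heapq

def plan_route(graph, origin, destination):
    """Return (total_time, path) for the fastest route, or None if unreachable.

    `graph` maps each node to a list of (neighbor, travel_time) pairs.
    """
    queue = [(0, origin, [origin])]          # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + travel_time, neighbor, path + [neighbor]))
    return None

# The human decides where to go; the tool only works out how to get there.
road_graph = {"home": [("junction", 5)], "junction": [("office", 7), ("mall", 3)]}
print(plan_route(road_graph, "home", "office"))  # (12, ['home', 'junction', 'office'])
```

Goal setting remains entirely on the human side; the tool is confined to working out how the goal is reached.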

Moving to the collaborative level, AI not only performs tasks but also assists in their formulation. This interaction leads to novel outcomes that might not be conceived by humans alone. AI-driven recommendations on social media and online shopping platforms, which analyze user data to suggest content or products, exemplify this level. 

The most advanced interaction occurs at the integrative level. Here, AI influences or partly shapes human deep motivations and mental structures, paralleling the cultural formation processes discussed by theorists like Cecilia Heyes (2018) and Lev Vygotsky (see Vygotsky & Cole, 2018). At this stage, AI extends beyond task assistance or goal formulation to play a role in shaping the individual's cognitive and motivational framework. Presently, this level remains largely theoretical, with real-world applications yet to materialize.

The interplay between the two dimensions delineates nine potential modes of human-AI collaboration, stemming from the combination of three levels of AI capabilities with three tiers of AI's role in action authorship. However, these dimensions are not entirely autonomous. For instance, the integrative level of AI's role in action authorship presupposes the existence of AI systems with at least AGI. Consequently, for the scope of this paper, it is practical to consolidate these nine scenarios into three broader categories: pre-human, human, and post-human levels. These categories serve as umbrella terms, simplifying the complex landscape of human-AI interaction into a more manageable framework for analysis. In the following sections, we will apply these three categories to ecological issues.
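
For readers who prefer to see the framework spelled out, the short sketch below enumerates the nine combinations explicitly. It is a minimal illustration: the three prototypical pairings and the labels attached to the remaining cells are interpretive assumptions made here, since the text fixes only the three umbrella categories and the constraint that integrative authorship presupposes at least AGI.

```python
# Illustrative enumeration of the nine capability x authorship combinations.
# The prototype mapping is an interpretive assumption made for this sketch;
# the text itself defines only the three umbrella categories.
from itertools import product

CAPABILITY = ["narrow (weak AI)", "general (AGI)", "superior (ASI)"]
AUTHORSHIP = ["operational", "collaborative", "integrative"]

# Prototypical cells around which the three umbrella categories are built.
PROTOTYPES = {
    ("narrow (weak AI)", "operational"): "pre-human level",
    ("general (AGI)", "collaborative"): "human level",
    ("superior (ASI)", "integrative"): "post-human level",
}

for capability, authorship in product(CAPABILITY, AUTHORSHIP):
    if capability == "narrow (weak AI)" and authorship == "integrative":
        label = "ruled out: integrative authorship presupposes at least AGI"
    else:
        label = PROTOTYPES.get((capability, authorship), "intermediate case")
    print(f"{capability:16} x {authorship:13} -> {label}")
```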

 

Pre-Human Level 

At the pre-human level, we encounter a blend of two fundamental characteristics: weak AI and operational-level interaction. Weak AI is inherently task-specific, potentially surpassing human capabilities in terms of calculation speed or efficiency in narrow domains, yet it remains devoid of broader understanding or general intelligence. In this context, the operational level of human-AI interaction is such that humans might not even be cognizant of AI's involvement or actions. This level of AI is instrumental in executing discrete tasks without necessitating direct human oversight or awareness, thereby functioning as an autonomous tool within its limited scope.  
Pre-human level AI assistants, with their task-specific capabilities, can play a pivotal role in promoting ecological sustainability and encouraging sustainable behaviors. Here are some examples: 

  1. Energy Management Systems: These AI assistants can optimize energy usage in homes and buildings by controlling heating, ventilation, and air conditioning (HVAC) systems based on occupancy patterns and weather forecasts (Palacín et al., 2017). By reducing energy consumption, they contribute to lower carbon emissions.

  2. Smart Irrigation Systems: Employing AI to manage irrigation in agriculture and landscaping can significantly conserve water. These systems analyze weather data, soil conditions, and plant types to optimize watering schedules, ensuring that plants receive the right amount of water at the right time, thereby reducing water waste (a minimal illustrative sketch of such a scheduler follows this list).

  3. Waste Sorting Assistants: AI-powered robots or systems in waste management facilities can enhance recycling processes by accurately sorting waste materials. By improving the efficiency of recycling, these AI assistants help reduce landfill use and promote the recycling of resources.

  4. Eco-Driving Applications: Some AI applications analyze driving patterns and provide suggestions for fuel-efficient driving. By advising on optimal speeds, acceleration patterns, and route planning, these assistants can help reduce fuel consumption and CO2 emissions from vehicles.

  5. Personal Carbon Footprint Trackers: AI assistants can help individuals track and reduce their carbon footprint by analyzing their daily activities, travel habits, and energy usage. By providing personalized tips and insights, these AI tools encourage more sustainable lifestyle choices.
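
As a concrete, if deliberately simplified, illustration of the pre-human level, the hypothetical sketch below implements the irrigation example from item 2 as a small rule-based scheduler. The thresholds, inputs, and linear watering rule are invented for illustration only; a deployed system would rely on calibrated agronomic and meteorological models.

```python
# Hypothetical sketch of a pre-human (narrow, operational-level) assistant:
# a rule-based irrigation scheduler. All thresholds and the watering rule
# are illustrative, not agronomic recommendations.
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture: float      # volumetric water content, 0.0-1.0
    rain_forecast_mm: float   # expected rainfall over the next 24 hours
    crop_factor: float        # relative water demand of the planted crop

MOISTURE_TARGET = 0.35        # assumed comfortable soil moisture level
RAIN_SKIP_MM = 5.0            # assumed rainfall that makes irrigation redundant

def watering_minutes(reading: FieldReading, base_minutes: float = 20.0) -> float:
    """Return today's irrigation time in minutes; 0.0 means skip watering."""
    if reading.rain_forecast_mm >= RAIN_SKIP_MM or reading.soil_moisture >= MOISTURE_TARGET:
        return 0.0                          # nature already covers the demand
    deficit = MOISTURE_TARGET - reading.soil_moisture
    return round(base_minutes * reading.crop_factor * (deficit / MOISTURE_TARGET), 1)

print(watering_minutes(FieldReading(soil_moisture=0.20, rain_forecast_mm=1.0, crop_factor=1.2)))
# -> 10.3: a narrow decision, made and executed without any broader understanding
```

The narrowness is the point: the system decides a single quantity from a handful of inputs and has no representation of goals beyond the task it was built for.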

While pre-human AI assistants offer significant benefits for ecological sustainability and can facilitate sustainable behavior, there are potential risks associated with their deployment, both for the environment and humans: 

  1. Energy Consumption: The operation of AI systems, especially those that require substantial computational power, can lead to increased energy use (König et al., 2022). If the energy consumed by these AI assistants is sourced from non-renewable energy, it could inadvertently contribute to higher carbon emissions, somewhat negating their ecological benefits.

  2. Electronic Waste: As AI technologies evolve, older systems may become obsolete, leading to electronic waste. Without proper recycling and disposal mechanisms, the environmental impact of this e-waste could be significant, contributing to pollution and resource depletion.

  3. Resource Intensiveness: The production and maintenance of AI systems involve the use of various materials and resources, including rare earth metals. The extraction and processing of these materials can have adverse environmental impacts, including habitat destruction and water pollution.

  4. Privacy Concerns: AI assistants that monitor and analyze personal behavior to promote sustainability, such as carbon footprint trackers, may raise privacy issues. The collection and processing of personal data could lead to breaches of privacy if not managed with stringent data protection measures.

  5. Equity and Access: The benefits of AI assistants might not be evenly distributed, leading to disparities in ecological impact reduction. Individuals or communities with limited access to these technologies may not experience the same level of support in adopting sustainable practices, exacerbating existing inequalities.

  6. Dependency and Loss of Skills: Relying heavily on AI for tasks such as energy management or waste sorting might lead to a dependency on technology, potentially eroding human skills and knowledge in these areas. Over time, this could diminish individual and collective capacity to address ecological issues without technological assistance.

Addressing these risks requires careful consideration of the design, deployment, and regulation of AI technologies. Ensuring that AI assistants are energy-efficient, designed for longevity, and recyclable, alongside safeguarding privacy and promoting equitable access, can help mitigate these potential drawbacks and maximize their positive impact on both ecology and human well-being. 

 

Human Level 

Human-level AI assistants represent a significant advancement in the field of AI, where systems exhibit capabilities akin to human intelligence. This level of AI, often referred to as strong AI, encompasses systems that can understand, learn, and apply knowledge in a wide range of contexts, mirroring human cognitive abilities. When coupled with the collaborative level of human-AI interaction, these assistants engage in a more dynamic and reciprocal relationship with users, actively participating in decision-making processes and goal formulation. 

In the context of sustainability, human-level AI assistants have the potential to revolutionize how we address ecological challenges. These systems can process vast amounts of environmental data, from climate patterns to energy consumption metrics, and synthesize this information to provide actionable insights. Unlike pre-human level AI, which operates within a narrow scope of predefined tasks, human-level AI assistants can adapt their strategies based on evolving data and complex variables, offering tailored recommendations for sustainable practices. 

Moreover, at the collaborative level, human-level AI assistants can engage users in a dialogue, learning from their preferences and feedback to refine their suggestions over time. This interactive process not only makes sustainability more accessible but also empowers individuals to make informed decisions that align with their values and circumstances. 

Here are a few examples, both real and hypothetical, of how these AI systems could be applied: 

  1. Sustainable Urban Planning Assistant: A human-level AI could assist urban planners and architects in designing eco-friendly and sustainable cities. By analyzing vast datasets, including demographic trends, environmental conditions, and urban infrastructure, the AI could propose optimal designs for buildings, public spaces, and transportation networks that minimize ecological footprints, enhance green spaces, and promote sustainable living practices.

  2. Personalized Sustainability Coach: Imagine an AI assistant that not only tracks your daily habits but also understands your preferences and constraints to offer personalized sustainability advice. This could include optimizing your home energy use, suggesting the most eco-friendly routes and modes of transportation, and even helping you make sustainable shopping choices by analyzing the lifecycle environmental impact of products (a schematic sketch of such a feedback loop follows this list).

  3. Ecological Monitoring and Conservation Agent: This AI assistant would operate on a global scale, analyzing environmental data from around the world to identify at-risk ecosystems and species. It could then collaborate with conservationists, governments, and local communities to devise and implement strategies for habitat protection, restoration, and sustainable management, all tailored to the specific needs and conditions of each area.

  4. Circular Economy Facilitator: A human-level AI could be instrumental in creating and managing circular economies, where waste is minimized, and resources are continuously reused. By understanding the intricacies of industrial processes, consumer behavior, and waste management systems, the AI could identify opportunities for recycling, remanufacturing, and repurposing materials across various sectors, thereby reducing environmental impact and promoting sustainability.
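
To indicate, at least schematically, what the collaborative loop behind a system like the personalized sustainability coach in item 2 might look like, the hypothetical sketch below re-ranks candidate suggestions as the user accepts or rejects them. The candidate actions, initial scores, and multiplicative update rule are illustrative assumptions; a real assistant would combine far richer preference models with lifecycle and context data.

```python
# Hypothetical sketch of a collaborative preference loop: the assistant proposes
# an action, the user responds, and future proposals are re-ranked accordingly.
# Candidate actions, scores, and the update rule are illustrative only.
SUGGESTIONS = {
    "cycle to work": 1.0,
    "lower the thermostat by one degree": 1.0,
    "cook a plant-based meal": 1.0,
    "combine errands into one trip": 1.0,
}

LEARNING_RATE = 0.3   # assumed strength of each piece of feedback

def recommend(scores: dict) -> str:
    """Propose the currently highest-scoring action."""
    return max(scores, key=scores.get)

def update(scores: dict, action: str, accepted: bool) -> None:
    """Nudge the score up on acceptance and down on rejection."""
    scores[action] *= (1 + LEARNING_RATE) if accepted else (1 - LEARNING_RATE)

# One round of dialogue: the assistant proposes, the user declines, the ranking shifts.
proposal = recommend(SUGGESTIONS)              # "cycle to work" on the first pass
update(SUGGESTIONS, proposal, accepted=False)
print(recommend(SUGGESTIONS))                  # a different suggestion surfaces next time
```

What distinguishes this from the pre-human sketches above is not the sophistication of the code but the shape of the interaction: the assistant participates in shaping which goals the user pursues next rather than merely executing a fixed one.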

Human-level AI assistants, while offering transformative potential for sustainability and ecological conservation, also introduce a set of risks that need careful consideration and management. These risks span ethical, social, and technical domains: 

  1. Ethical Concerns: As AI systems reach human-level intelligence and become more integrated into decision-making processes, ethical considerations become paramount. Issues such as algorithmic bias, where AI systems may inadvertently perpetuate or exacerbate existing societal inequalities, are of particular concern. Ensuring that these systems are designed and operated in a way that is fair, transparent, and accountable is crucial to avoid unintended negative consequences.

  2. Autonomy and Control: The advanced capabilities of human-level AI assistants may lead to scenarios where the AI's recommendations or decisions significantly influence human choices, potentially eroding personal autonomy. There is a delicate balance between benefiting from AI's insights and maintaining control over personal and societal decisions, especially in critical areas like sustainability, where diverse values and priorities must be navigated.

  3. Dependency and Resilience: An over-reliance on AI for managing sustainability issues could lead to a dependency that diminishes human skills and resilience in the face of ecological challenges. Ensuring that human knowledge and adaptive capacity are retained and valued alongside AI contributions is essential for a robust and resilient approach to ecological conservation.

  4. Privacy and Data Security: Human-level AI systems often require access to vast amounts of personal and sensitive data to function effectively. This raises significant privacy concerns, as well as risks related to data security and potential misuse of information. Safeguarding privacy and ensuring robust data protection measures are in place is vital to maintaining trust in these systems.

  5. Unintended Consequences and Complex Systems: The ecological and social systems that human-level AI assistants aim to benefit are inherently complex and interconnected. AI-driven interventions, even when well-intentioned, may have unforeseen consequences that ripple through these systems, potentially causing harm. A cautious and systems-informed approach is necessary to anticipate and mitigate such risks.

  6. Technological Unemployment: As AI systems become more capable, there is a risk of displacing professionals in fields related to sustainability and environmental management. This could lead to economic and social challenges that need to be addressed, ensuring that the transition towards AI-assisted sustainability is inclusive and equitable.

Addressing these risks requires a multidisciplinary effort, combining insights from AI ethics, environmental science, social sciences, and policy. By proactively identifying and mitigating potential negative impacts, we can harness the benefits of human-level AI for sustainability while safeguarding human values and ecological integrity. 

 

Step Aside: Machine Culture and Human Evolution 

As we approach the notion of post-human AI, it is crucial to understand that we are venturing beyond mere advancements in predictive capabilities or deeper integration into our daily lives. We are on the cusp of a qualitative transformation that challenges our current understanding, as it potentially entails a profound alteration of human nature itself. This shift prompts us to reevaluate fundamental concepts such as "freedom" and "autonomy," bringing forth new dimensions of benefits and risks that accompany such a transition.

The roots of human intelligence are deeply entrenched in social interactions and cultural transmissions, a concept widely recognized in psychology and neuroscience. The advent of Large Language Models (LLMs) has further cemented this understanding, illustrating how these models can be perceived as reflections—or casts—of human culture, distilled from our collective linguistic expressions (Duéñez-Guzmán et al., 2023). This process of AI development, emerging directly from our language, is far from inconsequential, as it already begins to weave its influence into the very fabric of our culture. Brinkmann et al. (2023) introduce the notion of "Machine Culture" to encapsulate the culture mediated or even originated by machine intelligence. If our cognition is shaped by the cultural narratives we share and if AI starts to mold these narratives, it implies that AI could have a fundamental impact on the structure of our intelligence, potentially 'hacking' into the mechanisms of cultural transmission that define us. 

What might this transformative process entail? At the pre-human level, AI's impact on the societal and individual nexus is indirect at best; AI's role is primarily to furnish humans with new information, acting as a sophisticated tool without engaging in the deeper cultural or societal dynamics. Moving to the human level, AI's role becomes more nuanced and interactive. It does more than merely present information; it facilitates a deeper understanding and engagement with diverse perspectives. Human-level AI assists in navigating the complex landscape of cultural meanings, thereby influencing human objectives and enriching the individual's cultural immersion.

The advent of post-human level AI marks a pivotal shift, elevating individuals beyond the conventional bounds of cultural transmission. At this stage, AI doesn't just augment human interaction with culture; it potentially redefines the very mechanisms of how culture and knowledge are transmitted and evolved. This level of AI might introduce new paradigms of learning, creativity, and problem-solving that transcend traditional cultural frameworks, facilitating novel forms of cultural evolution and intellectual growth. 

 

Post-Human Level 

At the post-human level, the human intellect is not merely augmented but fundamentally intertwined with AI technologies, forming a unified hybrid intelligence system. This fusion erases the traditional boundaries between human cognition and machine processing, creating an integrated entity where human and AI capabilities are indistinguishably merged. As previously discussed, this level transcends conventional cultural frameworks, elevating human reasoning to unprecedented heights. 

This elevation to a new realm of cognitive ability enables individuals to process and synthesize vast arrays of information and diverse social narratives in real-time. Such expansive cognitive capacity allows for the formation of personal values and ethics grounded in a more empirical and holistic understanding of complex issues. Specifically, in the context of sustainability, the values shaped at this post-human level align with the post-conventional stage of moral development as outlined by Lawrence Kohlberg (Kohlberg, Levine & Hewer, 1983). At this stage, an individual's commitment to energy conservation, for instance, is not driven by simplistic motives such as fear of reprisal or the desire for social approval. Instead, it is rooted in a profound and comprehensive understanding of ecological interdependencies and the long-term implications of human actions on the planet. 

As we contemplate the transition to a post-human level of AI, where hybrid intelligence systems become a reality, we must acknowledge that the very fabric of society and the nature of ecological challenges are likely to undergo profound changes. The integration of human and AI capabilities at such a deep level will not only alter how we interact with technology but also how we perceive our role and responsibilities within the broader ecosystem. In this evolving landscape, one of the most pressing concerns is ensuring that human autonomy is preserved and respected within these hybrid intelligence configurations. 

The path to achieving this balance requires a robust theoretical and meta-theoretical foundation. It is essential to engage in comprehensive studies that explore the ethical, philosophical, and practical implications of merging human and machine intelligence. These investigations should aim to develop frameworks that prioritize human agency, ensuring that individuals retain control over their decisions and the ability to shape their lives and societies, even as they benefit from the enhanced cognitive and problem-solving abilities afforded by AI. 

Moreover, as we navigate this transition, it becomes imperative to foster a dialogue across disciplines, bringing together experts from AI and machine learning, ethics, philosophy, sociology, ecology, and beyond. Such interdisciplinary collaborations can provide the diverse perspectives and insights needed to address the complex challenges that hybrid intelligence systems pose to human autonomy. 

 

Conclusion 

In this exploration of eco-conscious AI, we have traversed the spectrum from pre-human to post-human levels of AI capabilities and their interplay with human autonomy and ecological sustainability. Starting with the pre-human level, we recognized AI's role as a task-specific tool that, while limited in scope, offers significant potential for addressing ecological challenges through operational efficiency. As we progressed to the human level, we observed a shift towards a collaborative paradigm where AI not only assists but also enhances human understanding and engagement with complex ecological issues, fostering a deeper cultural immersion and a more informed approach to sustainability. 

The conceptual leap to the post-human level invited us to envisage a future where the boundaries between human and AI intelligence blur, giving rise to hybrid intelligence systems with the potential to redefine our interaction with the natural world. This level promises unprecedented capabilities in ecological management and sustainability practices, albeit accompanied by profound ethical considerations and the paramount need to preserve human autonomy. 

Throughout this discourse, the recurring theme has been the dual potential of AI to both address and redefine ecological challenges. As AI evolves, so too does its capacity to influence the ecological landscape and the human values and behaviors that shape it. However, this journey is not without its perils. The risks associated with each level of AI development—from operational dependency and privacy concerns at the pre-human level to the ethical and governance challenges posed by post-human AI—necessitate a cautious and informed approach. 

As we stand on the brink of these transformative advancements, the call for interdisciplinary research and dialogue becomes ever more pressing. Theoretical and meta-theoretical studies, coupled with practical guidelines and robust governance structures, are essential to navigate the complex interplay between AI, human autonomy, and ecological sustainability. Only through such concerted efforts can we ensure that the evolution of AI serves not just as a testament to human ingenuity but as a beacon for sustainable and equitable progress. 

 

 

References 

Amel, E., Manning, C., Scott, B., & Koger, S. (2017). Beyond the roots of human inaction: Fostering collective effort toward ecosystem conservation. Science, 356(6335), 275-279. 

Brinkmann, L., Baumann, F., Bonnefon, J. F., Derex, M., Müller, T. F., Nussberger, A. M., ... & Rahwan, I. (2023). Machine culture. Nature Human Behaviour, 1-14. 

Diederich, J. (2021). The Psychology of Artificial Superintelligence (Vol. 42). Springer Nature. 

Duéñez-Guzmán, E. A., Sadedin, S., Wang, J. X., et al. (2023). A social path to human-like artificial intelligence. Nature Machine Intelligence, 5, 1181-1188.

Guimarães, R. (2004). Waiting for Godot: sustainable development, international trade and governance in environmental policies. Contemporary Politics, 10(3-4), 203-225. 

Haghighi, S. R., Saqalaksari, M. P., & Johnson, S. N. (2023). Artificial Intelligence in Ecology. Bulletin of the Ecological Society of America, 104(4), 1-14. 

Heyes, C. (2018). Cognitive gadgets: The cultural evolution of thinking. Harvard University Press. 

Hong-xiao, Y. (2011). Tasks, Problems and Improvement of Ecological Education in Agriculture and Forestry Universities. Journal of Anhui Agricultural Sciences. 

Kautt, Y. U. (2019). Ecological Crisis, Sociality, and the Digital (Self-) Management. In Imagination, Creativity, and Responsible Management in the Fourth Industrial Revolution (pp. 241-262). IGI Global. 

Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics. 

König, P. D., Wurster, S., & Siewert, M. B. (2022). Consumers are willing to pay a price for explainable, but not for green AI. Evidence from a choice-based conjoint analysis. Big Data & Society, 9(1), 20539517211069632. 

Nowak, A. (2020). What state of the natural environment will we leave for future generations? Review of the causes of the current ecological crisis and the definition of eco-criminology. Przegląd Policyjny, 138, 187-200. 

Palacín, J., Clotet, E., Martínez, D., Moreno, J., & Tresanchez, M. (2017). Automatic Supervision of Temperature, Humidity, and Luminance with an Assistant Personal Robot. Journal of Sensors, 2017. 

Prisac, I. (2017). Ecological Policies and Their Challenges for the Economy of Eastern Europe. In Business Ethics and Leadership from an Eastern European, Transdisciplinary Context: The 2014 Griffiths School of Management Annual Conference on Business, Entrepreneurship and Ethics (pp. 147-157). Springer International Publishing. 

Vygotsky, L., & Cole, M. (2018). Lev Vygotsky: Learning and social constructivism. Learning theories for early years practice, 66, 58. 

Zhu, J. J., Jiang, J., Yang, M., & Ren, Z. J. (2023). ChatGPT and environmental research. Environmental Science & Technology. 
