Artificial Intelligence in Spanish Universities: How Students and Institutions Are Adapting



The rise of generative artificial intelligence (GenAI) is reshaping higher education at a pace few could have anticipated. In Spain, universities and students are navigating a complex landscape of opportunities and risks, as revealed by a recent study conducted by Fundación Conocimiento y Desarrollo (Fundación CYD). This observational research offers an early diagnosis of how AI is being used, perceived, and integrated in academic environments, and where significant gaps remain.

While AI promises personalized learning, streamlined research, and enhanced teaching support, it also raises concerns over plagiarism, bias, and the erosion of student effort. This article unpacks the report’s key findings, methodology, and practical recommendations, with a view to informing institutional strategies and public policy.

Understanding the Research Scope

Fundación CYD’s study draws on two independent surveys: one targeting universities and the other undergraduate students.

  • University survey (April–June 2024): Sent to 80 institutions (50 public, 30 private), with responses from 20 (12 public, 8 private). The goal was to assess institutional AI use, perceptions, and training practices.
  • Student survey (February 2025): Conducted with a representative sample of 800 undergraduates from across Spain (88% public, 12% private universities), balanced by gender and age (18–33 years). This survey explored frequency of AI use, purposes, concerns, and access to training.

The analysis was descriptive, relying on percentage-based results without causal inference or longitudinal tracking. Limitations include the small institutional sample size, reliance on self-reported student data, and lack of direct international comparison.

Institutional AI Adoption and Perceptions

Almost all surveyed universities have adopted AI in teaching, particularly for information retrieval and document editing through tools like ChatGPT or Microsoft Copilot. However, fewer than half use AI for evaluations, bibliographic work, or summaries, and personalized tutoring remains rare.

Training has largely focused on faculty and researchers, with less attention to students. Institutions are aware that students use AI for exam preparation, syllabus review, and resolving questions about course content, but they worry about:

  • Plagiarism and the difficulty of detecting AI-generated work
  • Algorithmic bias and reduced student effort
  • Economic and training barriers to broader AI integration

Only half of the universities have collaborated with tech companies, and just a third have benefited from external training or free software licenses.

Student Use of AI: High Adoption, Limited Training

The study found that 89% of students use AI, with 35% engaging daily and 44% several times per week. The most popular tools are:

  • Chatbots (81%)
  • Presentation and image generators (47%)
  • Data analysis tools (34%)

Primary purposes include resolving academic questions (66%), research and data analysis (48%), and academic writing or correction (45%).

Students recognize AI’s value: 63% believe it significantly improves their performance, and only 4% see no benefit. Nevertheless, concerns persist — 79% worry about security and privacy, and 54% express ethical reservations.

Crucially, 40% report that their university does not promote AI use (and 12% say it is restricted), while only 23% receive active encouragement. Just 34% have received specific AI training, though nearly half of those without it want such instruction.

Policy and Pedagogical Implications

The report outlines several urgent priorities:

  1. Expand AI training for students — covering technical skills and critical thinking.
  2. Redesign assessment methods to reduce vulnerability to automated completion, for example through in-person exams and practical projects.
  3. Establish clear AI use policies in teaching and research, with safeguards for academic integrity.
  4. Build partnerships with tech companies that protect ethics and data privacy.
  5. Integrate AI as a pedagogical tool to enhance learning without fostering over-reliance.

From a public policy perspective, the findings warn against widening digital inequalities. Without equitable access to tools and training, AI could exacerbate existing educational gaps. National strategies should incorporate AI literacy from early education, ensuring that students and institutions can engage critically and responsibly.

Conclusion: The Need for Balanced Integration

AI is now a pervasive part of Spain’s university classrooms, but its integration remains uneven and reactive. With most students already using these tools at least weekly, often without institutional guidance, universities face a choice: remain cautious observers or take the lead in shaping ethical, effective AI use.

The Fundación CYD study makes it clear that policies, training, and pedagogical innovation must keep pace with technological change. The challenge is not whether AI belongs in higher education, but how to harness its potential while safeguarding academic integrity and equity.



Reference: [1] Fundación Conocimiento y Desarrollo. Artificial Intelligence and the University: Use and Perception of AI in the University Environment [Internet]. Spain: Fundación CYD; 2025. Available from: https://www.fundacioncyd.org/wp-content/uploads/2025/05/PUBLICACION-Inteligencia-Artificial-y-universidad-8MAI.pdf
