With the proliferation of AI tools, graduate students often ask how these tools can be used ethically to bolster their research. ChatGPT has gained traction, but its utility is constrained by outdated information, hallucinations, and limited access to scholarly publications behind paywalls. Fortunately, newer AI tools have been developed that support academic exploration and discovery, offering features such as generative article summaries, citation chaining, and thematic literature mapping. This workshop explores three AI tools designed to support the research and literature review process: Perplexity, Research Rabbit, and Inciteful. The session will include a demonstration of their features, examples of ethical use, and strategies for evaluating AI research tools in the future.
AI powers driverless-car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, lets you unlock your phone with your face, and, on the iPhone X, lets you chat with friends as an animated poop using Apple's Animoji. Those are just a few ways AI already touches our lives, and there's plenty of work still to be done. But don't worry, superintelligent algorithms aren't about to take all the jobs or wipe out humanity.
The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
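To make the "training on examples" idea concrete, here is a minimal sketch of a model inferring its own decision rules from labeled examples rather than having a human write those rules; the library choice (scikit-learn) and the toy data are illustrative assumptions, not something from this article.

```python
# Minimal sketch: "training" a model from labeled examples instead of
# hand-coding rules. The toy data and library choice (scikit-learn)
# are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Each example is [hours_of_practice, games_played]; label 1 = wins often.
examples = [[1, 10], [2, 15], [30, 200], [45, 350], [3, 20], [50, 400]]
labels   = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(examples, labels)        # the model infers its own split rules

print(model.predict([[40, 300]]))  # prediction learned from patterns, not code we wrote
```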
There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.
Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language.
He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”
Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.
As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by the importance of learning to understand human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.
Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by the working of brain cells that are known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
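As a rough illustration of connections adjusting during training, the sketch below builds a tiny neural network in plain NumPy and repeatedly nudges its weights toward lower error on a toy task; the XOR problem, network size, and learning rate are all illustrative assumptions.

```python
# Toy version of the idea above: a small "web of math" whose connection
# weights adjust as it processes training data. Task, architecture, and
# hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # the network's current guess
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error signal at the hidden layer
    # Nudge every connection a little in the direction that reduces the error.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically ends up close to [[0], [1], [1], [0]]
```

Real deep-learning systems run essentially the same loop, just with vastly more data, layers, and computing power.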
Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.
Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power.
In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find. It's worth noting, however, that the AI field has been through several booms and busts (aka "AI winters") in the past, and another sea change remains a possibility today.
Alphabet-owned DeepMind has turned its AI loose on a variety of problems: the movement of soccer players, the restoration of ancient texts, and even ways to control nuclear fusion. In 2020, DeepMind said that its AlphaFold AI could predict the structure of proteins, a long-standing problem that had hampered research. This was widely seen as one of the first times a real scientific question had been answered with AI. AlphaFold was subsequently used to study Covid-19 and is now helping scientists study neglected diseases.
Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon, in particular, are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.
Much progress has been made in the past two decades, but there’s plenty to work on. Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can’t do, such as understanding the nuances of language, commonsense reasoning, and learning new skills from just one or two examples.
AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans, an idea known as artificial general intelligence that may never be possible. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.
There’s a particular type of AI making headlines—in some cases, actually writing them too. Generative AI is a catch-all term for AI that can cobble together bits and pieces from the digital world to make something new—well, new-ish—such as art, illustrations, images, complete and functional code, and tranches of text that pass not only the Turing test, but MBA exams.
Tools such as OpenAI's ChatGPT text generator and Stability AI's Stable Diffusion text-to-image generator manage this by sucking up unbelievable amounts of data, analyzing the patterns using neural networks, and recombining them in sensible ways. The natural language system behind ChatGPT has churned through much of the internet, as well as an untold number of books, letting it answer questions, write content from prompts, and—in the case of CNET—write explanatory articles for websites to match search terms. (To be clear, this article was not written by ChatGPT, though including text generated by the natural language system is quickly becoming an AI-writing cliché.)
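For a hands-on feel of what a text generator does, here is a minimal sketch that prompts the small, openly available GPT-2 model through the Hugging Face transformers library; GPT-2 is a stand-in for illustration, not the system behind ChatGPT.

```python
# Minimal sketch of prompting a generative language model. Uses the small,
# open GPT-2 model via Hugging Face's transformers library as a stand-in;
# the system behind ChatGPT is far larger and accessed through an API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial intelligence is",  # the prompt
    max_new_tokens=40,             # how much text to append
    num_return_sequences=1,
)
print(result[0]["generated_text"])  # fluent-sounding, not necessarily accurate
```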
While investors are drooling, writers, visual artists, and other creators are naturally worried: Chatbots are (or at least appear to be) cheap, and humans require a livable income. Why pay an illustrator for an image when you can prompt DALL-E to make something for free?
Content makers aren't the only ones concerned. Google is quietly ramping up its AI efforts in response to OpenAI's accomplishments, and the search giant should be worried about what happens to people's search habits when chatbots can answer questions for us. So long Googling, hello ChatGPT-ing?
Challenges loom on the horizon, however. AI models need more and more data to improve, but OpenAI has already used the easy sources; finding new piles of written text won't be easy or free. Legal challenges also loom: OpenAI is training its systems on text and images that may be under copyright, perhaps even created by the very people whose jobs are at risk from the technology. And as more online content is created using AI, a feedback loop emerges in which the data used to train models is produced not by humans but by machines.
Data aside, there’s a fundamental problem with such language models: They spit out text that reads well enough but is not necessarily accurate. As smart as these models are, they don’t know what they’re saying or have any concept of truth—that’s easily forgotten amid the mad rush to make use of such tools for new businesses or to create content. Words aren’t just supposed to sound good, they’re meant to convey meaning too.
There are as many critics of AI as there are cheerleaders—which is good news, given the hype surrounding this set of technologies. Criticism of AI touches on issues as disparate as sustainability, ethics, bias, disinformation, and even copyright, with some arguing the technology is not as capable as most believe and others predicting it’ll be the end of humanity as we know it. It’s a lot to consider.
To start, deep learning inherently requires huge swaths of data, and though innovations in chips mean we can process it faster and more efficiently than ever, there's no question that AI research churns through energy. One startup estimated that, in teaching a system to solve a Rubik's Cube with a robotic hand, OpenAI consumed 2.8 gigawatt-hours of electricity—as much as three nuclear plants could output in an hour. Other estimates suggest training a single AI model emits as much carbon dioxide as five American cars being manufactured and driven over their average lifespan.
There are techniques to reduce the impact: Researchers are developing more efficient training techniques, models can be chopped up so only necessary sections are run, and data centers and labs are shifting to cleaner energy. AI also has a role to play in improving efficiencies in other industries and otherwise helping address the climate crisis. But boosting the accuracy of AI generally means having more complicated models sift through more data—OpenAI's GPT-2 model reportedly had 1.5 billion weights to assess data, while GPT-3 had 175 billion—suggesting AI's sustainability could get worse before it improves.
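A quick back-of-the-envelope calculation gives a sense of what those weight counts imply; the 2-bytes-per-weight (16-bit) storage figure here is our illustrative assumption, not a number from the article.

```python
# Rough scale check for the weight counts quoted above.
# The 2-bytes-per-weight figure is an illustrative assumption.
gpt2_weights = 1.5e9
gpt3_weights = 175e9
bytes_per_weight = 2  # half-precision storage, just for a sense of scale

print(f"GPT-3 has roughly {gpt3_weights / gpt2_weights:.0f}x more weights than GPT-2")
print(f"GPT-2 weights alone: ~{gpt2_weights * bytes_per_weight / 1e9:.0f} GB")
print(f"GPT-3 weights alone: ~{gpt3_weights * bytes_per_weight / 1e9:.0f} GB")
```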
Vacuuming up all the data needed to build these models creates additional challenges, beyond the shrinking availability of fresh data mentioned above. Bias remains a core problem: Data sets reflect the world around us, and that means models absorb our racism, sexism, and other cultural assumptions. This causes a host of serious problems: AI trained to spot skin cancer performs better on white skin; software designed to predict recidivism disproportionately rates Black defendants as more likely to reoffend; and flawed AI facial recognition software has already misidentified Black men, leading to their arrests. And sometimes the AI simply doesn't work: One violent-crime prediction tool for police was wildly inaccurate because of an apparent coding error.
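One routine way researchers surface this kind of problem is to break a model's performance out by demographic group rather than reporting a single overall number. Here is a minimal sketch of that check; the group labels and data are invented purely for illustration.

```python
# Minimal sketch of a basic bias check: compare a model's accuracy across
# demographic groups instead of reporting one overall number.
# The data below is invented purely for illustration.
from collections import defaultdict

# (group, true_label, model_prediction) for each example in a test set
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# A large gap between groups (here 100% vs 50%) is a red flag worth investigating.
```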
Again, mitigations are possible. More inclusive data sets could help tackle bias at the source, while forcing tech companies to explain algorithmic decision-making could add a layer of accountability. Diversifying the industry beyond white men wouldn’t hurt, either. But the most serious challenges may require regulating—and perhaps banning—the use of AI decision-making in situations with the most risk of serious damage to people.
Those are a few examples of unwanted outcomes. But people are also already using AI for nefarious ends, such as creating deepfakes and spreading disinformation. While AI-edited or AI-generated videos and images have intriguing use cases—such as filling in for voice actors after they leave a show or pass away—generative AI has also been used to make deepfake porn, superimposing famous faces onto adult performers or targeting everyday individuals for defamation. And AI has been used to flood the web with disinformation, though fact-checkers have turned to the technology to fight back.
As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Meta have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or Black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI.
But the hype around generative models suggests we still haven’t learned our lesson when it comes to AI. We need to calm down; understand how it works and when it doesn’t; and then roll out this tool in a careful, considered manner, mitigating concerns as they’re raised. AI has real potential to better—and even extend—our lives, but to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.
Generative AI has had a great impact on the creation and use of content in various forms, such as text, music, and art. However, the technology also raises copyright issues and legal uncertainty: AI-driven tools are developing faster than the law can keep pace, and many aspects remain unclear. For example, it could be argued that using copyrighted content to build datasets in an educational setting often qualifies as "fair use" under US copyright law or fair dealing in Hong Kong. Publishers and copyright owners, however, retain the right to challenge such use and to seek compensation for intellectual property violations through the courts. If you use AI-generated content without checking whether it is based on copyrighted works, there is a risk of copyright infringement. Furthermore, AI tools can infringe copyright in existing works by generating outputs that closely resemble them.
Given the uncertainty surrounding copyright and AI, as well as the need for clarification on other topics related to the use of AI tools, it is crucial to be aware of the potential risks and take measures to protect ourselves and our works. Here are some recommended guidelines and best practices for utilizing AI in academic and scholarly fields.