OpenScholar: Synthesizing Scientific Literature with a Retrieval-Augmented LM
Synthesizing scientific literature with retrieval-augmented language models
TL;DR Highlight
A RAG-based scientific-literature synthesis model that searches 45 million open-access papers and backs every answer with citation sources
Who Should Read
Backend/ML engineers building automated paper-search and summarization pipelines, or teams building research-assistant AI tools
Core Mechanics
- Combines a datastore indexing 45 million open-access papers, a dedicated retriever, and a self-feedback inference loop
- OpenScholar-8B beats GPT-4o by 6.1% and PaperQA2 by 5.5% in correctness, despite being a smaller model
- GPT-4o hallucinates 78–90% of its citations, whereas OpenScholar's citation accuracy is on par with human experts
- Attaching OpenScholar's retriever and self-feedback loop to GPT-4o (OpenScholar-GPT-4o) improves correctness by 12% over GPT-4o alone
- In expert evaluations, OpenScholar-8B responses were preferred over expert-written answers 51% of the time, vs. 32% for GPT-4o alone
- Everything is open-sourced: code, models, datastore, datasets, and a public demo
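The datastore → retriever → LM → self-feedback flow above can be sketched as a loop. This is a toy illustration only: the retriever, generator, and feedback functions below are hypothetical stand-ins, not OpenScholar's trained components (which use a dense retriever over 45M papers and an 8B LM).

```python
def retrieve(query, datastore, top_k=3):
    # Toy lexical retriever: rank passages by word overlap with the query.
    # OpenScholar uses a trained dense retriever instead.
    q_words = set(query.lower().split())
    scored = sorted(
        datastore,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query, passages):
    # Stand-in for the LM: draft an answer citing each passage by index.
    cites = " ".join(f"[{i + 1}]" for i in range(len(passages)))
    return f"Answer to '{query}' {cites}"

def self_feedback(draft, passages):
    # Stand-in for the self-feedback step: flag any passage never cited.
    missing = [i + 1 for i in range(len(passages)) if f"[{i + 1}]" not in draft]
    return [f"Add citation [{i}]" for i in missing]

def openscholar_loop(query, datastore, max_rounds=2):
    # Stage 1: retrieve; Stage 2: generate; Stage 3: iterate on feedback.
    passages = retrieve(query, datastore)
    draft = generate(query, passages)
    for _ in range(max_rounds):
        feedback = self_feedback(draft, passages)
        if not feedback:
            break
        # In the real system each feedback item drives a targeted revision,
        # possibly with additional retrieval; here we simply regenerate.
        draft = generate(query, passages)
    return draft, passages
```

The loop terminates early once the feedback step finds nothing to fix, which is the property that lets the same wrapper also improve an off-the-shelf LM such as GPT-4o.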
Evidence
- On ScholarQABench (2,967 expert-written queries and 208 long-form answers across CS, physics, neuroscience, and biomedicine), OpenScholar-8B outperforms GPT-4o by 6.1% and PaperQA2 by 5.5% in correctness
- GPT-4o citation hallucination rate: 78–90%; OpenScholar citation accuracy: on par with human experts
- OpenScholar-GPT-4o improves correctness by 12% over GPT-4o alone
- Human expert preference vs. expert-written answers: OpenScholar-GPT-4o 70%, OpenScholar-8B 51%, GPT-4o alone 32%
How to Apply
- Index your own domain's paper PDFs into a dedicated datastore, OpenScholar-style, and apply the three-stage retriever → LM → self-feedback pipeline to your RAG architecture
- In an existing system built on a commercial LLM such as GPT-4o, adding OpenScholar's retriever in front and its self-feedback step after generation can directly improve citation accuracy and correctness
- Use ScholarQABench as a benchmark to evaluate the citation accuracy and correctness of your own literature-search pipeline
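One piece of the self-feedback step that is easy to bolt onto any existing LLM pipeline is a citation self-check over the draft. A minimal sketch, assuming inline markers like [1] — the sentence splitter and regex here are illustrative, not OpenScholar's actual implementation:

```python
import re

def uncited_sentences(answer: str) -> list[str]:
    # Split on sentence-ending punctuation, then flag any sentence
    # that carries no inline citation marker such as [1] or [12].
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    return [s for s in sentences if not re.search(r"\[\d+\]", s)]

draft = (
    "Retrieval improves factuality [1]. "
    "Self-feedback further boosts citation accuracy [2]. "
    "This claim has no source."
)
flagged = uncited_sentences(draft)
# flagged == ["This claim has no source."]
```

Flagged sentences can then be fed back to the LM for revision or removal, mirroring the "revise or remove" instruction in the prompt example below.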
Code Example
# Conceptual prompt sketch of the OpenScholar self-feedback loop
system_prompt = """
You are a scientific literature synthesis assistant.
Given retrieved passages with citation keys, write a factual answer.
After drafting, review each claim and verify it is directly supported
by at least one cited passage. Remove or correct any unsupported claims.
"""
user_prompt = """
Query: {user_question}
Retrieved passages:
[1] {passage_1} (Source: {paper_1_title}, {paper_1_year})
[2] {passage_2} (Source: {paper_2_title}, {paper_2_year})
...
Step 1: Draft a synthesis answer with inline citations [1], [2], ...
Step 2: Self-check — does every sentence have a supporting citation?
If not, revise or remove that sentence.
Step 3: Output the final answer.
"""Terminology
Related Resources
Original Abstract
Scientific progress depends on the ability of researchers to synthesize the growing body of literature. Can large language models (LLMs) assist scientists in this task? Here we introduce OpenScholar, a specialized retrieval-augmented language model (LM) that answers scientific queries by identifying relevant passages from 45 million open-access papers and synthesizing citation-backed responses. To evaluate OpenScholar, we develop ScholarQABench, the first large-scale multi-domain benchmark for literature search, comprising 2,967 expert-written queries and 208 long-form answers across computer science, physics, neuroscience and biomedicine. Despite being a smaller open model, OpenScholar-8B outperforms GPT-4o by 6.1% and PaperQA2 by 5.5% in correctness on a challenging multi-paper synthesis task from the new ScholarQABench. Although GPT-4o hallucinates citations 78–90% of the time, OpenScholar achieves citation accuracy on par with human experts. OpenScholar’s data store, retriever and self-feedback inference loop improve off-the-shelf LMs: for instance, OpenScholar-GPT-4o improves the correctness of GPT-4o by 12%. In human evaluations, experts preferred OpenScholar-8B and OpenScholar-GPT-4o responses over expert-written ones 51% and 70% of the time, respectively, compared with 32% for GPT-4o. We open-source all artefacts, including our code, models, data store, datasets and a public demo.