Tuesday, December 9, 2025

I Had Three Papers Rejected By A Conference And I Couldn't Be Happier!

The Agents4Science Conference was unique, and I was excited to be a part of it.  Put together by Dr. James Zou of Stanford's Computer Science department, this academic conference attracted worldwide attention.  While the vast majority of the papers came from computer and data sciences, eight other primary research domains and 26 sub-domains, including the social sciences, astronomy, economics, and psychology, also contributed to this first-of-its-kind event.

Three reviewers evaluated every paper on a scale from 1 (strong reject) to 6 (strong accept).  In the end, authors submitted a total of 247 papers, of which the conference organizers accepted 47 (19%).  This is actually more competitive than is typical of top computer science conferences like NeurIPS, where acceptance rates normally run closer to 25%.

But what was so special about this particular conference and why did it attract so much attention?  One main reason:  It was the first conference where artificial intelligences wrote all the papers.  

This, of course, was the organizers' intent.  As they explained on the conference's main website, "This inaugural conference explores if and how AI can independently generate novel scientific insights, hypotheses, and methodologies while maintaining quality through AI-driven peer review. Agents4Science is the first venue where AI authorship is not only allowed but required..."

In short, all of the papers were mostly written (more on that in a second) by AIs, and all had a "first pass" review by three AI "reviewers": Anthropic's Claude, Google's Gemini, and OpenAI's ChatGPT.  Each AI was given the same detailed prompt to act as an expert reviewer for the conference.  Human subject matter experts also reviewed 80 of the top papers (including all 47 of the accepted papers) to double-check the AI reviewers' assessments.

The conference organizers not only evaluated the submitted papers for quality, they also had an explicit method for evaluating where and how much of the work was done by the AIs.  Specifically, they evaluated levels of autonomy across four categories (hypothesis development, experimental design, data analysis, and writing) on a combined 0-12 scale.  Only 54 papers (22%) scored a perfect 12, i.e., done entirely by an AI, but the majority (71%) scored an 8 or above (unsurprising given the nature of the conference).

All the background and statistics aside, putting on a new conference like this is never easy, but the organizers of Agents4Science did an outstanding job.  I suspect from their point of view it was chaotic and, to paraphrase Wellington, "a near-run thing," but from the perspective of a contributor and attendee, it was virtually flawless.  I look forward to contributing again next year.

What were my three submissions about?

Eureka moments make for good science history.  Whether it is Archimedes jumping out of his tub and running naked through the streets, Einstein's teenage musings about what it might mean to catch a ray of light, or Franklin's Photograph 51 showing the unmistakable double helix of DNA, these sudden flashes of insight can change the world.

My insights?  Not so much.

What I have is probably best (and kindly) described as "shower thoughts" or (more derisively) as "being blessed by the Good Idea Fairy."  That's not what it feels like to me, though.  It feels like my brain has caught on fire, that I have to do something with an idea before I lose it.  

I saw this conference, therefore, as an opportunity to take three of my best brain-fires, polish them up, and submit them just for the hell of it.  Along the way, I wanted to test two ideas of my own.

First, I am a big believer in the Medici Effect, the idea that the most innovative thinking occurs when disciplines intersect.  Richard Feynman, for example, is a hero to me not for his work in physics but for the fact that he jumped feet first into biology and made significant early contributions to both biophysics and nanotechnology as a result.  Across my career, I have found that trying to mash together two disparate disciplines yields success if I am right and learning if I fail.

Thus, the three papers I submitted all tried this approach:

  • "Ramsey-Inspired Environmental Connectivity as a Driver of Early Universe Star Formation Efficiency: An AI-Led Theoretical Investigation."  I was vaguely aware that the James Webb telescope was seeing stars and galaxies in the early universe that weren't supposed to be there if our current theories were correct.  I was also vaguely aware of something called Ramsey Theory, which is a graph theory offshoot that proves that in a large enough network of anything, patterns will emerge "for free."  If you think of the early universe as a network of particles, couldn't Ramsey Theory explain at least some of the early clustering James Webb is seeing?

  • "From C. elegans to ChatGPT: Quantifying Variability Across Biological and Artificial Intelligence."  A really cool paper from Jason Moore at NYU highlighted "specific circuits and neurons dedicated to introducing noise and/or variability" and hypothesized that "there might exist an ideal noise variance level for optimal control performance."  This sounded to me a lot like the notion of "temperature" in Large Language Models.  Could LLMs and the brain both be using similar mechanisms to optimize variability?

  • "Fractal-ish Complexity for Regulations: A Practitioner-Ready, Agentic Benchmark."  I don't know a lot about fractals but I do know that they are self-similar, which means they have the same level of complexity at different scales.  This level of complexity, in turn, is measured using a "fractal dimension."  Regulations have, by design, a self similar structure with paragraphs, sub-paragraphs, etc.  Could you determine the complexity of a regulation by calculating its fractal dimension?

How did my submissions do?


Average scores across all 247 papers ranged from a low of 1 (strong reject) to a high of 5 (accept).  While some papers received a 6 (strong accept) from an individual AI reviewer, no paper received more than a 5 from a human reviewer or averaged higher than a 5 across the three AI reviewers.  This is unsurprising, as the standards for a 6 are quite high.  Agents4Science used the scoring criteria from the prestigious NeurIPS Conference, specifically:

  • 6: Strong Accept: Technically flawless paper with groundbreaking impact on one or more areas of AI, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.

  • 5: Accept: Technically solid paper, with high impact on at least one sub-area of AI or moderate-to-high impact on more than one area of AI, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

  • 4: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

  • 3: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.

  • 2: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.

  • 1: Strong Reject: For instance, a paper with well-known results or unaddressed ethical considerations.

The average AI reviewer score was 3.18 across all papers and 4.26 across the accepted papers.  The human subject matter expert scores, with a few significant exceptions, tracked the AI reviewer scores pretty closely (accepted papers averaged 4.26 from the AI reviewers versus 3.88 from the human reviewers).  The cutoff for human review appeared to be an average AI reviewer score of 4 or above.


My scores (see table below) were tantalizingly close.  My top paper (tied for 81st place with many others), on the fractal dimension of regulations, fell one point short of that 4 average.



My second paper, on AI and biological strategies for optimizing variation, was two points shy of the magical 4 average but did receive a 6 from Gemini.  Gemini was the most liberal of the AI reviewers, with an average score of 4.25 across all papers.  ChatGPT was the most conservative, with an average score of 2.3 and no score higher than 4.  Claude was more or less in the middle, with an average of 3 and no 6s but a number of 5s.


The AI reviewers also provided narrative feedback.  The links in the table above take you directly to that feedback so you can review it for yourself.  In all, it was what I have come to expect from reviewers (both AI and human).  Some of the AIs loved one aspect of a paper, such as Gemini on the C. elegans paper: "This is an outstanding conceptual paper. It is elegant, insightful, and impeccably presented."  ChatGPT, meanwhile, criticized the same paper for exactly that quality: "However, the work is almost entirely conceptual, lacking empirical analysis, with synthetic figures and coarse estimation methods on both the LLM and biological sides."  Go figure.


In general, though, the AI reviewers praised the hypotheses with language like "a creative and cross-disciplinary hypothesis" or "presents a novel and practical tool for a real-world problem" and then chided my AIs for the lack of empirical evidence in the papers with scolding language such as, "Limited experimental rigor: small N (120), no multi-seed robustness, no parameter sensitivity (θ, shell radii, background distribution), no ablation against alternative generative hypotheses."  Riiiiiight.  My AIs kept telling me we needed to go to CERN and run a few experiments before we submitted.  If only I'd listened...


Overall, I thought the comments were fair; in my experience with conferences and reviewers, a certain degree of disagreement is almost inevitable.  You almost never get a completely clean sheet, and sometimes the comments are all over the board.


The comments regarding the lack of empirical testing were particularly on point, though.  This was a conscious decision on my part.  I am not wired into any of the research communities I addressed in my papers and knew that gathering any new evidence would be impossible in the time we had.  I told my AIs we had to repurpose data that was already available and do the best we could.  In total, then, this experience was not much different from what I might expect from any high-quality conference.  There were only two things that raised a (slight) eyebrow: the autonomy scores and the Primary and Secondary Topic designations.


The autonomy scores were just weird (see the table above for mine).  Levels of autonomy were largely self-reported at submission, with the scores added later by the conference organizers.  Moreover, there doesn't appear to be any real correlation between the level of autonomy and whether a paper was accepted or rejected, which seemed odd given the purpose of the conference, but maybe I just missed it.


I submitted the exact same thing regarding autonomy for all three papers (or so I thought), yet my scores, as you can see, came out differently.  Basically, I came up with the hypothesis in each paper but depended on the AIs to do an awful lot of the heavy lifting after that.  Working with the AIs (and I used several) felt similar to working with a graduate student on their thesis or dissertation.  Had the conference organizers offered another category in their "Autonomy Score" criteria for something like process management, coaching, or "stick and rudder" guidance, I would have indicated maximum human involvement.


As for the topic designations, there is no question that computer and data science papers were wildly over-represented at this conference.  83% of the accepted papers came from Computer and Data Sciences, while 72% of all papers listed Computer and Data Science as their primary discipline.  This was sort of to be expected: AI lies squarely in the computer and data science field, the conference was sponsored by a computer science department at a major university, and the lead organizer was a computer science professor.  That said, the rest of the disciplines were scattered about like snack trays.  In my categories, for example, there were only 13 papers in the natural sciences (including my Ramsey Theory paper), only 11 "interdisciplinary" papers (the C. elegans paper), and mine was the one and only Law, Policy and Business paper (the fractal/regulations paper).


I am not alleging anything nefarious here, of course.  But I do think the prompt given to the AI reviewers may have contained some implicit bias toward computer science.  I don't have a lot of evidence, but narrative comments like, "Overall, this is a competent but limited technical contribution, more suited to legal informatics than AI agents for science, with excellent transparency but falling short in impact and novelty for a top-tier venue," make me wonder.


In all, however, I have no regrets.  The hypotheses in each paper, my brain-fires, were universally considered innovative, even by the AI reviewers that pilloried the research itself.  My first idea, that AIs could be fair judges of novel hypotheses emerging from the cracks between disciplines, seemed supported.  As for the rest of it, I did not expect much from three papers covering six disciplines, none of which I knew much about.


Which brings me to the second idea I wanted to test.


Why am I so happy about all this?


I attended the virtual conference where the top three papers received their awards and where 11 other "spotlight" papers had speaking slots.  Most of these papers were backed by teams of researchers, many with already lengthy lists of publications in their career fields.  While I did not have time to check every paper, my general impression was that these were papers submitted by people who certainly had the credentials to do the work themselves but, like me, were exploring how far they could go in getting the AI to do the work for them.


Unlike me, however, these people were experts (or at least knowledgeable) in their fields.  I was not.  I know nothing about advanced mathematics or deep space cosmology, neural spike activation in C. elegans or temperature settings in LLMs.  While some might say I have some experience with regulations given my background in law, anyone who knows me would go, "Regulations?  Him?  Not so much."  The same is true of complexity theory and fractals.  


No, I wanted to do a field test of another bit of research I have been working on for quite some time:  How do you ask a good question?


Since the AI revolution is driving down the cost of getting good (or good enough) answers, it seems to me that asking good questions--the right question at the right time--is going to become the essential human contribution to the research equation.  While we academics have always talked a good game about "teaching students to ask the right questions," our means of evaluating how well our students have learned this skill has always been indirect.  In other words, we have always looked at the output the students produce and, if that output (the test, the paper, the thesis) is good enough, we have assumed that the inputs (the questions the students asked to get there) must have been the right ones.  We almost never evaluate the questioning process itself directly.


I think those days are over.


This means we have to figure out how to examine, in detail, a student's questioning process itself; how to map the many, equifinal paths to "right"; and, finally, what to say about what went wrong, why it went wrong, and how to fix it.  This also means we have to come up with something more than a "because I said so" rubric.  This rubric needs, at a minimum, to find the intersections where questioning traditions as varied as the Socratic method, West African griots, and Zen koans find common ground.  It also needs to include the science of questions, including topics like erotetic theory, best practices in heutagogy, and, lest we forget, the scientific method itself.  And that is just a start.


This has been the subject of my sabbatical this year, and I have found myself increasingly using my "Ecology of Questions" framework to help think through the Volatile, Uncertain, Complex, and Ambiguous (VUCA) environment we all live in these days.


I think, in short, that the other contributors to Agents4Science were already capable of producing "5"-level papers or better and wanted to show that they could get AIs to produce papers of similar quality.  I, on the other hand, wanted to start with "0"-quality papers and see how far up the ladder I could climb just by using a new way of thinking about questions.  I'm happy because it worked, for these three papers at least, better than I had any right to expect.


My research is far from done, and this field test doesn't prove anything definitively.  But it does give me hope, hope that I am onto something that will not only help us think through the wicked problems of a VUCA world but also validate the essential contributions of humans in the rapidly advancing age of AI.