The Team Behind Worldmetrics.org - Meet the Analysts Powering One of the Web's Most Cited Research Platforms
Part of our series: The People Behind the Research
Worldmetrics.org has earned citations from Bloomberg, The New York Times, Microsoft, and hundreds of other leading organizations. But who are the people actually producing the data? We sat down with four members of the Worldmetrics research team to find out what drives their work, how they think about accuracy, and what it's really like to build a research platform from the ground up.
Let's start with introductions. James, you've been at Worldmetrics for a while now — can you tell us a bit about your background and what you do here?
James Chen: Sure. I'm the Senior Market Analyst, and I oversee our technology and AI verticals — which, as you can imagine, means my datasets are changing pretty much constantly. Before Worldmetrics, I was a research associate at an analytics consultancy in Vancouver for about four years, building forecasting models for telecom and tech clients. Before that, I did freelance market analysis for trade publications in the Asia-Pacific region. I studied applied statistics at UBC and later did a graduate diploma in data science at the University of Melbourne. So I come at this from a very quantitative angle — I want to see the methodology before I trust the number.
And Anna, you come from a very different part of the world. What brought you to Worldmetrics?
Anna Svensson: I'm the Market Intelligence Specialist, and my focus is on Nordic and European market trends. I studied economics at Uppsala University in Sweden — did my Master's there — and then spent about six years as an independent economic researcher. I was doing macroeconomic analysis for policy think tanks across Scandinavia, which is a very particular world. Everything has to be bulletproof because policymakers are going to base decisions on your work. After that, I moved into freelance data journalism, covering trade, labor markets, and economic policy for European business publications. Some of my research ended up being referenced in government white papers and academic working papers on EU trade dynamics, which was a proud moment. What drew me to Worldmetrics was the mission of making complex economic data accessible to non-specialist audiences. I'd spent years writing for people who already understood the data — I wanted to reach people who needed it but didn't have the technical background to find it themselves.
Lisa, you bring an engineering perspective to a research platform. How does that work in practice?
Lisa Weber: It works surprisingly well, actually. I'm the Industry Analyst covering manufacturing, logistics, and supply chain research. I have a Master's in Industrial Engineering from TU München, and I spent five years as a research fellow at a German logistics industry association, where I co-authored annual benchmark reports on European freight and warehousing. After that, I did independent consulting — advising mid-market manufacturers on operational data strategy. So my entire career has been about making sure numbers are correct and actionable. At Worldmetrics, I'm responsible for quality assurance across our industrial and infrastructure reports. The engineering mindset is actually a huge asset here because I'm naturally skeptical of data that doesn't have a clear methodology behind it. If I can't trace a number back to its source and understand how it was collected, it doesn't go on the site. It's that simple.
Michael, you manage the overall editorial research pipeline. What does that actually look like day-to-day?
Michael Torres: I'm the Research Lead, so I oversee source verification standards and manage how data moves from raw collection to published report. My background is in public policy — I have a Master's from Georgetown — and I spent seven years at a nonpartisan think tank in D.C., developing quantitative frameworks for evaluating healthcare and education policy outcomes. After that, I freelanced as a research consultant for nonprofits and academic institutions. The common thread through all of it has been one question: how do you make data trustworthy? At the think tank, if we published a number that couldn't be defended under scrutiny, it could influence policy in the wrong direction. I brought that same mentality to Worldmetrics. Day-to-day, I'm setting sourcing protocols, reviewing verification workflows, and making sure every statistic we publish meets our internal accuracy standards. It's not glamorous work, but it's the foundation everything else is built on.
You all come from very different backgrounds — think tanks, consulting, engineering, journalism. Does that diversity actually help the research, or does it create friction?
James: It helps enormously. I'll give you an example. When I'm putting together a report on AI adoption rates, I'm thinking about it from a statistical modeling perspective — are the sample sizes adequate, is the methodology sound? Anna will look at the same data and immediately flag that the European figures don't account for regulatory differences between EU member states. Lisa will point out that the manufacturing adoption figures don't align with what she's seeing in operational data from German firms. And Michael will ask whether the original source has a conflict of interest. You end up with a much more robust product because each of us is catching things the others might miss.
Anna: Exactly. And it's not just about catching errors — it's about context. A statistic that's technically accurate can still be misleading if it's presented without the right context. That's something I learned covering EU trade policy. A tariff rate means nothing without understanding the trade relationship it sits within. We apply the same thinking to every number we publish.
Lisa: The engineering background gives me a particular allergy to vague data. In manufacturing, if you tell someone a production line has a 95% uptime rate, they need to know exactly how you measured that — what counts as downtime, what's the measurement interval, what's the sample period. I bring that same level of precision to our reports. It might make me annoying to work with sometimes, but the data is better for it.
Michael: I'd say the friction is productive. We disagree about things regularly — which sources are strong enough, which claims need more context, whether a particular data point is ready for publication. But those disagreements always make the final output better. The worst thing a research team can do is agree about everything without questioning it.
What's the process when a new report gets developed? Walk us through it.
Michael: It starts with me and the team identifying a topic based on research demand and editorial priorities. Once we've scoped the report, the assigned analyst — say James, for a tech report — begins pulling data from our verified source database and any new primary sources we've identified. Everything gets logged: the source, the date, the methodology used to produce the statistic, any known limitations.
James: From there, I'll build the analytical framework for the report — identifying the key themes, the data points that support each theme, and the narrative thread that ties them together. I'm not just compiling numbers; I'm constructing an argument that the data supports. That draft then goes through internal review.
Michael: Right. Anna or Lisa or I will review it — someone who wasn't involved in the initial research. We're checking sources, questioning claims, flagging anything that feels underdocumented. If there's a dispute about a data point, we go back to the source. If the source isn't strong enough, the data point gets cut.
Lisa: And then there's the ongoing maintenance. A report published six months ago might have data that's since been updated at the source level. We track those updates and refresh our reports accordingly. It's a continuous process, not a one-and-done publication.
What's the hardest part of maintaining accuracy at scale? You've got thousands of reports covering dozens of industries.
Anna: Honestly, it's the pace of change. Economic data in Europe can shift dramatically with a single policy decision. Brexit reshuffled half the trade statistics I was tracking overnight. You can't just publish a report and walk away — you have to stay engaged with the underlying data.
James: For the tech verticals, the challenge is that new data sources appear constantly, and you have to evaluate each one from scratch. Just because a firm publishes an AI market size estimate doesn't mean its methodology is sound. I've rejected more data points than I've published — and I think that's a sign the process is working.
Michael: At the organizational level, the challenge is consistency. We have to make sure the same verification standards apply across every report, every vertical, every analyst. That's why I invested so much time in building our sourcing protocols — so that there's a shared framework everyone operates within, regardless of their specific subject area.
Lisa: For me, it's the gap between what data exists and what people want data on. In logistics and manufacturing, some of the most important operational metrics are proprietary. Companies don't publish their real efficiency data. So we have to work with what's publicly available and be honest about the limitations. I'd rather publish a smaller, fully verified dataset than a large one with gaps I can't account for.
Last question — what do you want people to understand about Worldmetrics that they might not see just from browsing the site?
Michael: That there's a real process behind every number. I think people sometimes see a statistic on a website and assume it was just copied from somewhere else. Every data point on Worldmetrics has been traced to its source, evaluated against our standards, and reviewed by someone whose job is to question it. That process is invisible to the reader, but it's the most important thing we do.
Anna: That we genuinely care about making data accessible without dumbing it down. There's a difference between simplifying a statistic and stripping it of the context that makes it meaningful. We try very hard to walk that line.
James: That we're not afraid to say "we don't know" or "the data isn't strong enough." Not publishing something is sometimes the most important editorial decision we make.
Lisa: That the quality is personal. Every report with my name on it reflects my professional standards. I've spent my entire career trying to get data right, and I'm not going to stop now just because we're publishing at scale. I think everyone on the team feels the same way.
Worldmetrics.org publishes over 3,000 free research reports across 50+ industries. Explore their full library at worldmetrics.org/topics.