This is a transcript of the 26th January, 2017 edition of the weekly Nature Podcast. Audio files for the current show and archive episodes can be accessed from the Nature Podcast index page (http://www.nature.com/nature/podcast), which also contains details on how to subscribe to the Nature Podcast for FREE, and has troubleshooting top-tips. Send us your feedback to podcast@nature.com.
[Jingle]
Kerri Smith: Coming up, we take a look at the Outer Space Treaty 50 years after its creation.
Michael Listner: Look at the Outer Space Treaty as creating outer space as a national park.
Adam Levy: And, predicting crime to prevent it.
Jeff Brantingham: What crimes have occurred in the past? Where did they occur? And when did they occur? That's the only information that we use.
Kerri Smith: Plus, enhancing the wisdom of the crowd. This is the Nature Podcast for January the 26th, 2017. I'm Kerri Smith.
Adam Levy: And I'm Adam Levy. [Music continues, ends]
Kerri Smith: 50 years ago this week, the Outer Space Treaty was opened for signatures. Adam's been taking a look at how the treaty has evolved and its origins in the Cold War space race.
[Jingle]
Audio clip: October the 4th, 1957. And the world's press announces a miracle of the age. The Russians have successfully launched the first satellite ever to circle the Earth. Well, Sputnik hurtles its way into space to make a date with history that heralds the dawn of a new era. [Sound of rocket]
Adam Levy: We're now living in that new era heralded by Sputnik some 60 years ago. As the space race between America and the USSR developed, it quickly became clear that the international community needed to agree on what could and could not be done as part of humanity's endeavors to explore outer space. And so, ten years after Sputnik and just two years before the first moon landing, the Outer Space Treaty was opened for signatures.
Niklas Hedman: That time was very much affected by the Cold War and the race for the moon.
Adam Levy: This is Niklas Hedman, chief of the Committee, Policy and Legal Affairs Section at the United Nations Office for Outer Space Affairs.
Niklas Hedman: As a result of the two major superpowers at that time and the geopolitical context, this treaty is of course a compromise. Having said that, actually this treaty is putting down the minimum legal requirements for the entire space community to be able to do space business in the future so it's not anarchy in outer space.
Adam Levy: To avoid space anarchy, the treaty specifies that exploration and use of outer space should be carried out for the benefit and interest of all countries and that space should be the province of mankind. But a lot has changed in the last 50 years.
Niklas Hedman: There are two, I would say, really main changes over those years. And that is the technological and scientific advancement in space affairs. The other factor is of course you have far more states using space tools and also other actors. We can see only in the last ten years, really a push from various companies to do space business.
Adam Levy: So with all the developments of the last half-century, is the Space Treaty still relevant?
Michael Listner: It is a limited tool that was designed for a different era.
Adam Levy: This is Michael Listner. He's a space lawyer.
Michael Listner: It's a lot more mundane than it really sounds.
Adam Levy: Michael reckons applying this decades-old treaty to our modern era is a bit of a stretch.
Michael Listner: We're trying to interpret it to fit all the new realities of technology and activities that it's really getting bigger and bigger and bigger. And it's threatened to the point of actually exploding.
Adam Levy: One activity that Michael fears may explode the treaty is a new form of space exploration, space exploration with a very different motivation than the missions of the 1950s and 60s. [General space sounds]

Audio clip: Our tiny planet sits in a vast sea of resources, including millions of asteroids. The same rocks that could fall from our skies also contain everything we could ever need. It's time someone seized the opportunity. Deep Space Industries.
Adam Levy: This is audio from a promotional video for the company Deep Space Industries. And yes, they're really hoping to one day mine asteroids. What's more, in 2015 the United States passed legislation which would allow exploitation of these resources. But while Deep Space Industries are adamant that this is in line with the Outer Space Treaty, Michael Listner's not quite so convinced.
Michael Listner: Look at the Outer Space Treaty as creating outer space as a national park. Like a national park, you can go to outer space. You can visit it. You can use the water to drink. But what you can't do is you can't go there and start cutting down trees and, or going out there and mining minerals and selling them for profit. So outer space is just the same way. You can go there freely. But the idea is it doesn't allow you to actually turn it into something for a profit.
Adam Levy: But if a country does overstep the bounds of the Outer Space Treaty, the consequences aren't necessarily too dramatic.
Michael Listner: It's the scorn of the international community. It's basically the UN can make a big – could make a big – 'tizzy' of it. But there are really no penalties for, quote, “violating international law,” except for political ramifications.
Adam Levy: As our relationship with space evolves, should our international agreements evolve, too? Niklas Hedman explains that there's no clear agreement in the international community as to whether the treaty still applies to our current space endeavors.
Niklas Hedman: I mean, there are states that advocate indeed, yes, it does because it's so general that it actually can be adapted to the changes. Others say that, no, it is outdated because so much has happened that the treaty needs to be amended.
Adam Levy: In 1967, nobody could've known that countries today would be having legal debates about asteroid mining. And today, nobody can predict what the next 50 years may bring. But to Michael, we are already reaching a point where the treaty may be holding us back from exploring space as fully as possible.
Michael Listner: We're getting to a point technologically where we want to do asteroid mining and space resource mining, where we want to colonize and create settlements. But the Outer Space Treaty is really binding us. Consider the Outer Space Treaty like a big down comforter. It's been around so long, and we've wrapped it – we've wrapped ourselves around it so long that we're really afraid to get out of the comforter and move forward.
Adam Levy: That was Michael Listner, founder of Space Law and Policy Solutions, based in New Hampshire in the US. Before him you heard from Niklas Hedman at the United Nations Office for Outer Space Affairs in Vienna, Austria.
Kerri Smith: Still to come in the Research Highlights, ants are the masters of walking backwards, and electric cars aren't pollution-free. But before that, Noah Baker headed into downtown London to seek some wisdom from a crowd. [Music and sounds of traffic]
Member of public: I would say 101.
Member of public: 243.
Member of public: 36.
Member of public: 57.
Member of public: 120.
Member of public: 100. [Laughs]
Drazen Prelec: The wisdom of the crowd refers to attempts to sample large numbers, thousands of people's opinions on all kinds of questions. And it's positioned as an alternative to credentialed experts.
Noah Baker: That's Drazen Prelec from MIT in the United States. The question in this case: how many tea bags are in a jar?
Member of public: How many tea bags?
Member of public: 52.
Member of public: Probably.
Member of public: Yeah. Probably 52.
Member of public: I'm gonna say 50.
Member of public: Maybe about 70 I'd say, maybe.
Member of public: 150?
Member of public: 63.
Noah Baker: Granted, these answers are pretty varied. But the idea is that if you ask enough people, then the average answer will be the correct one. The crowd as a whole is wiser than the average individual. But this technique doesn't always work.
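The averaging technique Noah describes can be sketched in a few lines of Python. The guesses below are the street answers heard in the clip (treating the two "probably 52" responses as a single pair of 52s); the true number of tea bags in the jar isn't stated, so the code only shows how the crowd estimate is formed:

```python
# Guesses collected on the street, as heard in the episode.
guesses = [52, 52, 50, 70, 150, 63]

# The classic "wisdom of the crowd" estimate is simply the mean guess.
crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate, 1))  # -> 72.8
```

With only six guesses the estimate is rough; the claim is that, as the crowd grows, individual errors cancel out and the average converges on the right answer.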
Drazen Prelec: It falls down when you have two kinds of information, some information that everybody knows more or less, and then you have information that's available only to a subset. Then the average opinion can be hugely imbalanced.
Noah Baker: This can also be thought of as a common misconception. For example, can you tell me what the capital of New York State is?
Member of public: New York State in the US? New York.
Member of public: Is it Washington?
Member of public: New York City.
Member of public: New York City.
Member of public: New York City? [Laughs] I'm assuming it's New York.
Member of public: Manhattan? No?
Noah Baker: Perhaps reasonably, the majority of people I asked assumed that New York City is the capital of New York State. But in fact, it's the much lesser known –
Member of public: Albany?
Drazen Prelec: So this is what's called an unwise crowd.
Noah Baker: And unwise crowds can give inaccurate answers to questions: not very helpful.
Member of public: I was going to say Manhattan. I haven't got a clue. Oh, my God. This isn't being filmed or anything? How ignorant are we? [Music plays, finishes]
Noah Baker: So how do you improve this technique? Let's take the New York question again but phrase it slightly differently. Is New York City the capital of New York State?
Drazen Prelec: In the traditional wisdom of crowds system, you would simply ask people for their opinion, yes or no. And we take this one step further: we ask the same people, the same panel, the same crowd also to predict what the crowd itself – how the crowd will vote? So they have to offer a prediction in percent terms what percent of the people will say yes, New York is the capital, and what percent will say it is not the capital.
Noah Baker: Scientists can use this percentage like a pass mark for the first question. For example, if the question was, 'is the sky blue?' you can bet that everyone would say yes. And they'd predict that 100 percent of others would know that too. In this case, the pass mark, 100 percent, is very high. But everyone will guess the right answer, and so the pass mark is reached. This is because the sky being blue is common knowledge.

But in the case of the New York question – 'is New York City the capital of New York State?' – things might be a bit different. Most people will say that New York City is the capital of New York State because it seems logical. They'll also expect that, say, 90 percent of others will agree with them because they think it's common knowledge. The pass mark is again set very high, at 90 percent, but unlike the blue sky question, that pass mark won't be reached because the actual capital of New York State isn't New York City. It's Albany.

What the crowd thinks is common knowledge is in fact a common misconception. In cases like this where the crowd is choosing between two answers, Drazen can look at the pass mark and –
Drazen Prelec: If it doesn't meet it, then the other answer is correct. For a binary question it's very, very simple.
Noah Baker: Essentially, the crowd thinks it knows something, which in fact it doesn't know in practice, which hints that there's actually a common misconception. So the answer is the other one. But does it work?
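The binary decision rule Drazen describes can be sketched in code. This is only an illustration of the logic in the episode, not the published algorithm, and the vote counts and the 90 percent prediction below are hypothetical numbers echoing the New York example:

```python
def surprisingly_popular(votes, predicted_yes_share):
    """Decide a yes/no question the way the episode describes.

    Treat the crowd's average prediction of how many will say 'yes'
    as a pass mark. If the actual 'yes' share falls short of that
    pass mark, 'yes' is surprisingly unpopular, so the answer flips
    to 'no'.

    votes: list of booleans (True = yes)
    predicted_yes_share: average predicted fraction answering yes
    """
    actual_yes_share = sum(votes) / len(votes)
    # 'Yes' stands only if it is at least as popular as predicted.
    return actual_yes_share >= predicted_yes_share

# Hypothetical numbers for "Is New York City the capital of New York
# State?": 65 of 100 people vote yes, but the crowd predicted 90
# percent would -- the pass mark isn't met, so the answer is no.
votes = [True] * 65 + [False] * 35
print(surprisingly_popular(votes, predicted_yes_share=0.90))  # -> False
```

For the 'is the sky blue?' case, where everyone votes yes and everyone predicts everyone else will too, the pass mark is met and the function returns True.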
Drazen Prelec: So, it's been quite successful. And we tried it in many different domains. And usually it cuts down errors by about 20, 30, 40 percent. In theory, it should be infallible. That's too much to hope for, but it's a robust improvement.
Noah Baker: Of course, it's not infallible. I asked people in downtown London about US geography, and perhaps unsurprisingly, they didn't necessarily know the answer. But what if you were to ask people in New York State, where arguably this specialist knowledge would be a whole lot less specialist?
Drazen Prelec: Yes, that's spot on. And in fact, I would be hesitant in using the method to overturn a very, very strong consensus. For example, if in New York State 95 percent of the people know the answer, I would not overturn a very strong majority like that, because it tends to get unstable at these extremes.
Noah Baker: And what about questions with more than two answers? Well, Drazen argues that this approach will still work there.
Drazen Prelec: Yes, yes. Well, what you have to do then is you have to run what's essentially a round robin tournament. Every answer plays against every other answer. And you decide the winner. And the correct answer is the one that sits on top of this tournament just like in a football championship.
Noah Baker: Drazen suggested that this kind of approach could be interesting if applied to political polling, which has had a bit of a bumpy time of late. Although, it is trickier with political forecasting as there's no right answer, just predictions. Drazen also defended the current political polling methods.
Drazen Prelec: You're really trying to improve an instrument that's already pretty good. The polls didn't get it right, but they came pretty close. So you're really trying to reduce an error that's already small.
Noah Baker: While the maths may get more complicated for multiple-choice questions, ultimately the basis of this algorithm is a very simple bit of logic.
Drazen Prelec: It's a syllogism that would've been familiar to people 2,000 years ago, I think. So in that sense you can leverage high technology. But the ideas are not – do not require anything that wasn't known to Aristotle.
Noah Baker: But sometimes scientists just like to take the long way round.
Drazen Prelec: The older I get, the more surprised I am at things that are just lying around the corner that we don't notice. My colleagues and I arrived at this through a mathematical route that's – that was pretty complicated. But in the end, we said, “Oh, there's good news and bad news. [Music plays] The good news is there's a simple theory, a simple explanation. The bad news is there's a simple explanation.” [Laughs] Because you don't need the mathematics. So it happens. It happens. I think knowledge blinds us to what is just sitting there in front of our eyes. [Music ends]
Kerri Smith: That was Drazen Prelec from MIT in the States speaking with Noah Baker. Thanks to the crowd around King's Cross Station for taking part in our little surveys. You can read more about Drazen's new algorithm over at http://www.nature.com/nature.
Adam Levy: Stay tuned to hear how police are trying to predict crime and the problems that they might run into while doing it. Now, though, is the short, snappy Research Highlights, read by Corie Lok.
[Jingle]
Corie Lok: Electric cars are promoted for their positive impact on the environment, but the story is not that simple, according to a study by economists in the United States. They looked at air pollution from motor vehicles by region in the US. They found significant variation in the benefits of electric cars across the country. For example, in the western part, where a sizeable amount of electricity comes from clean energy sources, electric vehicles produce less air pollution than gas-powered cars. But in the Midwest, electric cars actually produce more air pollution because electricity there is generated mostly by coal-fired power plants. The US government pays a subsidy to people who buy electric cars. But the researchers say that this subsidy should account for regional variation in environmental impact. You can find out more in the journal American Economic Review.

When ants get lucky and manage to capture a large piece of food, they have to walk backwards to drag that big item behind them. How do they find their way home when walking backwards? To figure this out, researchers studied a species of desert ant in the field. They found that ants that walk forwards adjusted their course by looking at the surrounding scenery. But backwards-moving ants stopped occasionally, peeked forward, and then corrected their direction. This shows that the insects were able to translate their forwards view into an internal compass bearing, allowing them to keep their sense of direction, even when moving backwards. Ants are probably using at least two different brain regions for this complex navigational behavior. You can find the study in the journal Current Biology. [Music plays, finishes]
Adam Levy: The classic trope of many a cop show is the car chase. By its nature a chase always makes it look as if the police are trying to catch up to the bad guys. But several forces have started trying to get ahead by predicting what crimes will happen when. Security dream or problematic policing? Kerri takes a closer look.
Kerri Smith: For most PhD students, winding up in a police car would not count as an achievement. But Aaron Shapiro's day with the St. Louis County Police was a real highlight for him.
Aaron Shapiro: It was very interesting to ride through the town of Ferguson with a police officer in the front seat of his car.
Kerri Smith: Ferguson probably sounds familiar. It's the suburb of St. Louis where, in 2014, black teenager, Michael Brown, was shot by a white police officer. Relations between the black community and the police force hit the rocks. More than a year later, in December 2015 –
Aaron Shapiro: It was still very much on the police departments' minds. They felt as if they were just then coming out of that string of events, the media attention, and the protesters.
Kerri Smith: But Shapiro wasn't there to ask the officers about Michael Brown. He was there because he wanted to learn about a new way St. Louis County was trying to deploy its police officers to fight crime. It's called predictive policing, and the idea is to use data on past crime to forecast future crime. Now, crime mapping is something police departments have always done.
Aaron Shapiro: Think back to the 1950s and 60s and the television images of a map on a police department's wall. And they have thumbtacks where crimes took place and maybe a string connecting the dots.
Kerri Smith: That was all computerized in the 1990s. But the maps were a bit vague. They would tell you what crimes took place over the past couple of weeks or so in a neighborhood. Then it was over to officers to interpret this information. The newer systems for predictive policing use finer-grained geospatial data to make predictions themselves. Here's Jeff Brantingham at UCLA. He's the creator of a system called PredPol.
Jeff Brantingham: The big difference here is in the ability to process much larger amounts of data so you can look back more than just a week or two weeks' worth of data. And you can use more sophisticated algorithmic methods to detect what's going on in those volumes of data.
Aaron Shapiro: So instead of a whole neighborhood or a census tract, predictive policing goes all the way down to a street corner. And it can make a fine grain prediction about when a crime would take place within that grid cell and also potentially what kind of crime would take place there.
Kerri Smith: To achieve their predictions, most systems use information on what crime was committed when and where. Some systems add in extra data on local bars, bus stops, schools, weather, and sport schedules. Here's Brantingham again.
Jeff Brantingham: The data that we use is really quite simple. It's historical records of what crimes have occurred in the past. Where did they occur? And when did they occur? That's the only information that we use. This is information that the majority of police departments around the planet collect on a routine basis.
Kerri Smith: PredPol and other systems with names like HunchLab, they're generating a lot of buzz. 20 out of 50 of the largest police forces in the US have used a predictive policing system, with another ten or so exploring options, according to a survey done in 2016 by the tech consultants, Upturn. Aaron Shapiro watched the officers in St. Louis County as they got used to their new HunchLab system.
Aaron Shapiro: When the officer would look at the predictions that the software was giving him, he was a bit surprised at some of the places, and he wasn't surprised at some of the other places. We went by some alleyways that the officer would not have parked his car in.
Kerri Smith: But despite that novelty, critics warn that the systems are biased and opaque and could dent civil rights. This week in Nature, Aaron Shapiro points out a few concerns in a comment piece. First, he says, the systems are only as good as the data feeding into them.
Aaron Shapiro: The data are already biased and reflect biases in the practices of policing in terms of where officers are patrolling and which crimes get labeled as crimes because in a wealthier neighborhood, someone might be more likely to get away with a very minor infraction, whereas in a low-income neighborhood they might be stricter. The concern is that that is reflected in the crime data upon which these algorithms are trained.
Kerri Smith: Plus, the way the systems actually work isn't very transparent. Some, like PredPol, have worked with researchers to publish their algorithms in research papers. But much of the detail remains murky from the point of view of the public. Brantingham's response is that these systems are not functioning alone.
Jeff Brantingham: No matter what technology you put in the field, right, that never replaces the responsibility of police to police constitutionally. Predictive policing really doesn't change that in any fundamental way.
Kerri Smith: But Brantingham does acknowledge that these systems can be pretty hard to test. If a police car is stationed on a street corner and a crime doesn't happen, how do you prove that your algorithm prevented the crime?
Jeff Brantingham: Just as you would think twice about taking a pharmaceutical that didn't have any clinical trials associated with it, you would also worry about putting something into public policy practice without having scientific evidence to back it up in that way. And then it was really in 2010 and '11 that we decided to put into the field a randomized, controlled trial to answer exactly that question.
Kerri Smith: The trial, run in Los Angeles and in Kent, England, suggested that patrols based on predictive data led to a 7.4 percent drop in crime volume. But there's another question mark over these systems. Will some people become unfair targets of suspicion? Most systems don't include information on individuals, but that doesn't mean that people-y features are completely absent. Race, for example, correlates strongly with poverty in the US. Poverty is geographically concentrated. And so, says Aaron –
Aaron Shapiro: There's no way to control for race playing a role in geographic predictions.
Kerri Smith: Back to St. Louis County, who are putting HunchLab into practice in an area where race is never far from people's minds. For now, Aaron thinks that there are policing challenges in some communities that no algorithm will solve.
Aaron Shapiro: I was travelling with a white officer. He lives in a rural county, away from St. Louis County. St. Louis County is – it's a suburb of St. Louis, but it's predominantly black and low-income. And this guy lives in a middle class, majority white community. That contrast struck me as one of the things that the software just can't solve.
Kerri Smith: That was Aaron Shapiro, a doctoral candidate at the Annenberg School for Communication at the University of Pennsylvania. Read Aaron's comment piece for free at http://www.nature.com/news. You also heard from Jeff Brantingham, crime researcher at UCLA and creator of PredPol.
Adam Levy: Time now for this week's News Chat. And Richard Van Noorden joins us in the studio. Hi, Richard.
Richard van Noorden: Hi, Adam.
Adam Levy: Now, of course the news on everyone's mind is kind of Trump-themed. But we're gonna take a break from that here and return to it in Backchat. Instead, let's look at a question that's slightly more, I guess, internal to the scientific community, which is peer review. There's been a debate recently about how private peer review should be.
Richard van Noorden: The idea that peer review should be confidential has been one of the norms of scientific journals for a few decades. Recently, some journals have started to opt for open peer review. Sometimes the name of the person who reviewed a paper is revealed. Sometimes the entire text of the review is revealed. Some journals offer this, and many don't.

Now, one scientist is really turning up the heat on a publisher who doesn't offer open review. He wants to publish the text of his review online, and he insists that he never signed any bit of paper saying he couldn't. But the publisher disagrees. So we've got a stand-off between a scientist and a publisher.
Adam Levy: So does the publisher actually specify somewhere that he can't do this? And if they do, then what's the problem?
Richard van Noorden: This is the key question. So the scientist here is Jon Tennant. And he'd reviewed a paleontology paper. He got the author's permission to post the text of his review on a website called Publons, where reviewers can get credit for the work they've done. But Publons said, “Oh, hang on a minute. Elsevier, which is the publisher, doesn't let you do this.” And Tennant says, “I didn't sign a confidentiality agreement.” But Elsevier says, “Well, on our general guidelines website, it does say that reviewers shouldn't share information about the review with anyone without permission from the author and the editors.”

Tennant is a bit tricky on this point. He says, “Well, okay. You didn't point me to this website.” And, in fact, he's posted on his own website a little note that says, “I charge £10,000.00 for peer reviewing.”
Adam Levy: [Laughs]
Richard van Noorden: And he says, “Well, suppose I was to review for someone and not point them to my website. Does this mean I can get money off them?”
Adam Levy: He's clearly pushing this quite hard and clearly cares quite a lot about it. But what are the actual benefits of having peer review more open?
Richard van Noorden: Well, the idea is that it gets people to realise that science isn't just about the sacrament of the final paper, that a lot of work and revisions have gone into making the final paper what it was. That's what people who advocate for open peer review want. And the question in this particular dispute is not so much about that sort of general philosophical idea as about whether the scientist was ever really told that reviews were confidential. And the idea that norms are changing so fast that journals can't really rely on some sort of vague, everybody-knows-this concept. “It says so on our website somewhere.”
Adam Levy: So is this situation with Elsevier journals, is that quite typical? How does it affect, say, picking a journal at random, Nature?
Richard van Noorden: Well, Nature, for example, only goes with confidential peer review. Although Nature Communications, one of the other journals in the Nature stable, has been trialing open review when the author, editor and reviewers want it.
Adam Levy: I suppose this question of how open peer review is is only really a question if there is peer review in the first place. And there's something of a move to not have papers peer reviewed at all.
Richard van Noorden: Yeah. This is another interesting idea in publishing. A geneticist, this time, put up a pre-print paper, which is a paper that hasn't been peer reviewed, on a server called bioRxiv. And he said, “I left a comment indicating that I regard this paper as my final version. I'm not going to 'publish'” – I'm raising my hands in quote marks here – “I'm not going to publish this. I'm never going to submit it to a journal. I put it online, and there it is. And it's done. There's no review there, so that's it.”
Adam Levy: So I suppose that's kind of redefining the pre-print server as just the server. It's not going to be printed.
Richard van Noorden: Exactly. And who's to say that people couldn't come along and provide reviews afterwards on that server if they wanted to? So he – this geneticist, Graham Coop at the University of California, Davis – he wants to kind of experiment with how pre-prints are perceived by researchers. Because I think some scientists perceive them as very provisional, subject to change, not fully worked-out ideas, because they haven't been fully peer reviewed. But he's saying, “Well, sometimes that's just it.”

So interestingly, other biologists replied to Coop and said, “Yeah. Hey, we do this already.”
Adam Levy: Well, perhaps this is a slightly old-fashioned opinion. I'm beginning to think that my perspective on peer review might be a bit outdated. But isn't peer review meant to uphold a particular quality standard?
Richard van Noorden: Well, the thing is that some very bad papers can get published even though they've passed peer review. It really depends on the standard of peer review at journals. And as we were just discussing, a lot of that is confidential. So as long as you're aware that a pre-print has not been peer reviewed, it might be quite good. It might be bad. If you really want to know, you probably either need to be a specialist in the field, or you need to look around and see what other people are saying about it.

So extra caution needed, perhaps, when you're looking at a non-peer-reviewed paper. But there's no reason why it couldn't be perfectly good and just as interesting as a peer-reviewed paper.
Adam Levy: We talk sometimes at Nature and on the podcast about the pressures on younger scientists. And one of those pressures is the pressure to publish. Is that pressure relieved by publishing on a pre-print archive? Would funding bodies be happy with a paper that's only on a pre-print archive?
Richard van Noorden: The problem really is that publishing in a peer-reviewed journal is also a measure of prestige. And if you're a young researcher, you need that prestige. You need that track record to advance your career. And saying, “I published a great paper on a pre-print server,” at the moment doesn't give you the kind of prestige that hiring committees are looking for. And it's interesting that Graham Coop was the only author on this paper. And he said if he had done this with other students, he probably would have published it in a journal because they need that journal track record.

Graham Coop had one other point about whether in the future it's viable that everyone just puts out their work as pre-prints and leaves them there, which is that at least under the current system, every paper gets some kind of peer review because it's organised. If you are going to rely on scientists leaving comments or reviews underneath pre-print papers, you're just not going to get enough unless that activity is itself organised. If someone rounds up three scientists for every paper on bioRxiv and says, “Can you leave a comment, please?” then we would have organised post-publication peer review. But at the moment, a few papers get an enormous number of comments because they're very interesting or they're very controversial. But most papers on pre-print servers or online get nothing.
Adam Levy: Thank you, Richard. For more on those stories, it's http://www.nature.com/news. And to stay up to date, follow Nature News on Twitter, @NatureNews, or follow the Nature multimedia team, @NaturePodcast. This week on YouTube we've got a video about an AI that can detect skin cancer, http://www.youtube.com/naturevideochannel for that.
[Jingle]
Kerri Smith: As you might have heard, Backchat is back for 2017. And the first episode will be arriving very shortly on your RSS feed. We'll have the latest on Trump's science-related appointments. Plus, if you've ever wondered whether to call your giant science project a framework or a roadmap, we have the answer for you. Well, I can't promise the answer, in fact. But we did discuss the semantics. I'm Kerri Smith.
Adam Levy: And I'm Adam Levy.
[Jingle]