To the Editor

The international Genetically Engineered Machine (iGEM) student competition is both a workbench and a showcase for synthetic biology. The competition is based on a simple idea: the synthetic biology engineering principles of standardization, abstraction and modularity can be applied to biotech to make engineering new functions into living systems less intimidating, more accessible and more predictable. This year, iGEM will have been running for a decade, and the organization will celebrate the event with a 'giant jamboree' involving as many as 300 teams. The competition has reached a peak in terms of media impact (Internet search volumes are considerable and show a clear seasonal pattern fitting the competition calendar; accessed 17 December 2013), attendance and expectations. As former participants in iGEM (C.V. was a student attendee for three years and M.P. was a team supervisor and judge for six years), we have conducted an analysis of iGEM projects presented over the past 10 years. Our analysis reveals several challenges that the competition must face if it is to remain a flagship of synthetic biology.

iGEM takes place in a pedagogic setting and within a time frame of less than 1 year, in such a way that even undergraduate students without prior training in biology, but with reasonable technical and theoretical support, can participate1. It has been described as a test bed for synthetic biology projects; as an example of engineering ingenuity2; as a framework for increasing interest in 'human practices' (the term used in iGEM for ethical, legal and social implications (ELSI)) approaches, such as biosecurity and biosafety3 and intellectual property4; and as a challenge providing leadership to the field5. iGEM seeks not only to educate young students in synthetic biology but also to foster other personal abilities, such as self-confidence, creativity and effort. Multicultural and interdisciplinary exchange of knowledge, teamwork, networking and information sharing via public wikis are also part of the added value of the iGEM experience.

The peak of the iGEM competition is the jamboree, where the teams present the synthetic biology projects they conducted during the summer. Since 2011, the competition has been organized in regional jamborees that take place in October, and only around one-third of the teams go on to the world championship, which is held the first week of November at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts.

Over the past decade, participation in the competition has expanded from an early group of US projects to >200 teams distributed worldwide (Fig. 1). Even so, the geographical distribution of iGEM participants remains biased toward North America, Eastern Asia and Europe. Every year each of these regions accounts for around 25% of the participating teams; however, groups from Latin America have been more involved in recent years (Fig. 1a). In terms of awards, Europe has been the most successful continent (Fig. 1b); indeed, there have been three competitions in which all the finalists were of European origin. One team, from Ljubljana, Slovenia, has reached the finalist pool (comprising three teams) five times and been awarded the grand prize three times.

Figure 1: iGEM attendee analysis.

(a) Number of teams attending iGEM by region. (b) Regional origin of attendees. Shading indicates the cumulative number of teams representing each region since 2004. The distribution of awards is represented by circles, whose size is proportional to the number of finalist teams representing each country (the smallest circles represent one finalist team, whereas the biggest represents 14 finalist teams since 2006). For Europe, awards are represented both by country (green circles) and for the whole continent (orange circle).

In terms of judges, the geographical distribution is predominantly local in the regional jamborees (i.e., judges come from the region where the jamboree is based) and international in the world phase. Even so, diversity is lower among judges than among teams in the world jamboree; overall, there is a strong geographic bias in favor of US and European judges over Asian ones.

In terms of funding, an average team attending the regional phase (ten students and two advisors) spends a minimum of $10,000 just for team and individual fees, travel and lodging (this does not include the additional fees when teams advance to the world championship). Added to this is the cost of the rather expensive materials and technologies used to perform often ambitious wet lab experiments in iGEM (such costs can range from thousands to tens of thousands of dollars, depending on the project). In the absence of publicly available data, we estimate (very roughly) that the average cost per team is around $20,000–50,000 per year, suggesting that the cost for all participants in the 2013 competition overall was approximately $4 million–10 million.
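These rough figures can be made explicit in a short back-of-the-envelope calculation. The numbers below are the assumptions stated above (minimum fees plus an assumed spread of wet-lab costs), not official iGEM data:

```python
# Back-of-the-envelope estimate of iGEM participation costs.
# All figures are the letter's rough assumptions, not official data.

TEAMS_2013 = 200                  # order of magnitude of participating teams
FEES_TRAVEL_MIN = 10_000          # minimum per-team fees, travel and lodging (USD)
WETLAB_RANGE = (10_000, 40_000)   # assumed spread of wet-lab costs (USD)

per_team_low = FEES_TRAVEL_MIN + WETLAB_RANGE[0]   # ~$20,000
per_team_high = FEES_TRAVEL_MIN + WETLAB_RANGE[1]  # ~$50,000

total_low = TEAMS_2013 * per_team_low    # ~$4 million for the whole competition
total_high = TEAMS_2013 * per_team_high  # ~$10 million

print(f"per team: ${per_team_low:,}-${per_team_high:,}")
print(f"2013 total: ${total_low:,}-${total_high:,}")
```

With these assumptions the per-team and competition-wide ranges reproduce the $20,000–50,000 and $4 million–10 million estimates quoted above.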

Although the total price tag for each year's iGEM is similar to that of a medium-to-large cooperative scientific project, which would be expected to yield important scientific publications and/or patents, it is important to stress that iGEM is an educational program. As such, its success should not be measured in terms of scientific publications or patents. In fact, each iGEM competition typically yields very few scientific publications or intellectual property and, to the best of our knowledge, fewer than half of the finalist projects have been published so far. The rate of published iGEM projects has not risen in line with the maturity of the competition (Table 1).

Table 1 Finalist projects in the iGEM competition, 2006–2013

A set of specialized volunteer judges choose medal and award winners and select which teams will advance from regional competitions to the world championship. In the last world jamboree, 52 judges were in charge of assessing the performance of 146 teams. The majority of judges (76.9%) came from academia (many of whom were also team instructors); 9.6% originated from government departments; 5.8% were from companies; and the remaining 7.7% were from the committee (iGEM organizers).

Whereas in regional competitions judges consider a team's overall project, wiki, presentation, modeling of the problem, and submission and use of BioBrick standards, in the world championship only four aspects of the project are assessed (overall project, wiki, presentation and modeling). Each judge casts votes that are converted into a numerical score in an online rubric. There is a double award system. First, medals (gold, silver and bronze) are awarded on completion of a list of requisites, including the construction of new BioBrick parts, the submission of these parts to the Registry of Standard Biological Parts and the assessment of the project in terms of safety and bioethics. Second, prizes are awarded by judges to the winner, first runner-up and second runner-up. Only about one-third of the teams advance to the world championship at MIT, and this rate might become even more competitive in the coming years if attendance continues to rise.

The iGEM competition and the Registry of Standard Biological Parts are two branches of the same tree. In fact, one requisite to earn a medal is to submit at least one biological part, either natural or engineered, to the registry. To prepare a BioBrick part from raw DNA, students have to 'stick' specific prefix and suffix short adapters, including restriction enzyme cutting sites, to the desired DNA sequences to make them suitable for the registry and thus, theoretically, standardizable and module ready. The average number of parts submitted to the registry, around 10 per team, has remained relatively stable throughout the history of the competition, although award-winning teams tend to submit many more, up to hundreds (Fig. 2a). To date, iGEM teams have collectively submitted >12,000 parts to the registry (Fig. 2b). Only around 40% of those have been checked satisfactorily to ensure that they work as expected upon submission (Fig. 2c). It must be stressed that the growing number of parts, the diversity of assembly methods used and the difficulties of performing quality control on a continuous basis (most of which still relies on the controls made by the teams) require iGEM organizers and the registry to undertake characterization, preparation and delivery work of titanic proportions.
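As a sketch of what making a part 'BioBrick-ready' involves, the snippet below flanks a raw sequence with the standard BioBrick (RFC 10) prefix and suffix and rejects parts carrying the standard's forbidden internal restriction sites. The prefix/suffix sequences are the published RFC 10 ones; the helper function is ours, for illustration only (coding sequences actually use a slightly shortened prefix, which we omit here):

```python
# Minimal sketch of preparing a BioBrick (RFC 10) part: flank the raw
# sequence with the standard prefix/suffix adapters and check that it
# contains none of the standard's restriction sites internally.
# The helper name and toy sequence are illustrative, not from iGEM code.

PREFIX = "GAATTCGCGGCCGCTTCTAGAG"   # EcoRI - NotI - XbaI
SUFFIX = "TACTAGTAGCGGCCGCTGCAG"    # SpeI - NotI - PstI
ILLEGAL_SITES = {
    "EcoRI": "GAATTC", "XbaI": "TCTAGA",
    "SpeI": "ACTAGT", "PstI": "CTGCAG", "NotI": "GCGGCCGC",
}

def make_biobrick(part: str) -> str:
    """Return the part flanked by the RFC 10 prefix and suffix,
    refusing parts that carry an internal illegal restriction site."""
    part = part.upper()
    for enzyme, site in ILLEGAL_SITES.items():
        if site in part:
            raise ValueError(f"part contains an internal {enzyme} site ({site})")
    return PREFIX + part + SUFFIX

construct = make_biobrick("atgcatccaggtcat")  # a toy DNA fragment
```

The internal-site check is the reason raw sequences cannot always be submitted as-is: a part containing, say, an EcoRI site must first be mutated or resynthesized before it is compatible with the standard assembly scheme.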

Figure 2: iGEM and the Registry of Standard Biological Parts.

(a) Average number of parts submitted per team (green bars) and per finalist team (white bars) during 2004–2013. Diamonds (numbered in accordance with Table 1) represent finalist teams. (b) Cumulative number of iGEM parts in the registry since 2004. (c) Proportions of verified parts (labeled as 'working' in the registry) and untested or nonworking parts submitted to the registry, as stated by iGEM finalist teams in 2006–2013. Finalist teams are numbered according to Table 1. Ambiguous data were found for teams marked with asterisks. (d) Proportion of registry-issued and new parts used by iGEM finalist teams in 2006–2013, as reported on each team's wiki pages.

The link between iGEM and the registry also works the other way around: iGEM teams not only submit parts but are also encouraged to use, at their convenience, the BioBrick parts submitted by other teams in previous editions, which are already present in the registry. But an analysis of de novo versus registry-issued parts (according to the case-by-case information on each team's wiki) reveals that iGEM teams that have been successful in terms of awards tend to avoid the uncertainties of parts designed and/or characterized by others, choosing instead to build new, ad hoc DNA parts for a specific purpose (Fig. 2d). As previously mentioned, a cautious attitude toward standard DNA parts seems to be common among participants6,7.

In evaluating the success of iGEM as a didactic endeavor, an interesting comparison can be made with similar efforts, such as the FIRST robotics competition, devoted to promoting engineering and technology skills among young students (most of whom are in high school). Although the number of students trained during the first decade of the FIRST competition is much higher than in iGEM, the latter has reached—with substantially lower overhead costs—an unprecedented geographical spread in a shorter time. As with FIRST, the main outcome of iGEM is educational. The main goal of iGEM is to educate students in synthetic biology, so that they might contribute to transformational advances at some time in the future. Funding in iGEM is thus a long-term educational investment. As stressed by Randy Rettberg, general iGEM coordinator and president of the iGEM foundation, during the 2013 closing ceremony, iGEM aims to foster effort, accomplishment, excellence, respect, cooperation and integrity.

From our analysis of iGEM over the past ten years, we believe that the competition has to adapt if it is to maintain its status as a pillar of synthetic biology and as an example of an exciting and dynamic scientific competition. For it to do so, we believe greater focus should be placed on the quality rather than the quantity of parts in the registry. The increasing number of parts and the worrying trend of adding new ones rather than using standard ones deserve deep reflection by participants and organizers alike. An intelligent strategy is already in place: iGEM teams get more 'points' if they improve the characterization of an existing part. We would suggest an additional improvement strategy consisting of selecting a relatively small number (no more than 100) of parts every year and asking teams to improve their characterization and/or test their performance in a range of hosts and conditions. This would yield a smaller but improved pool of more reliable parts.

From another perspective, one might ask whether the strong linking of iGEM to the BioBrick biological standard is necessary. Is this the only and/or best standard possible? Should molecular cloning–issued biological devices be a requisite for a team to attend the competition? This question is particularly pertinent in an era where not only adaptor-based standard cloning systems, but also zinc-finger nucleases, transcription activator–like endonucleases and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas-based methods8 are playing an increasing part in synthetic biology. In the context of continuously falling costs of DNA chemical synthesis9, the cost of chemically synthesizing all the BioBrick parts used by an iGEM team is relatively affordable: considering 1-kilobase constructs (containing a small promoter, an average-length prokaryotic gene and a terminator) and a low synthesis price (around $0.28 per base pair), an iGEM team could have ten BioBrick standards ready to transform for $2,800. This cost is 10 times higher than that of a BioBrick parts assembly kit, but it is still substantially lower than the other expenses an iGEM team must cover (a regional jamboree registration for the whole team, for instance). Furthermore, if molecular cloning were not needed, students would receive presynthesized constructs quickly (in fewer than two weeks), so more time could be dedicated to characterizing the parts in depth. This fact leads us to suggest that, however radical this initiative may be, any synthetic DNA part, even those lacking BioBrick-ready adapters, should be acceptable in the competition.
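The arithmetic behind this synthesis estimate is simple to lay out explicitly; the price and construct size below are the assumptions stated above, not current vendor quotes:

```python
# The letter's synthesis-cost estimate, made explicit.
# Price and construct size are assumptions from the text, not vendor quotes.

PRICE_PER_BP = 0.28           # assumed low synthesis price, USD per base pair
CONSTRUCT_LENGTH_BP = 1_000   # small promoter + average prokaryotic gene + terminator
N_PARTS = 10                  # BioBrick-sized constructs per team

team_synthesis_cost = PRICE_PER_BP * CONSTRUCT_LENGTH_BP * N_PARTS
print(f"${team_synthesis_cost:,.0f}")  # prints $2,800
```

At this price, full synthesis remains roughly an order of magnitude more expensive than an assembly kit but well below a team's travel and registration budget, which is the comparison driving the argument above.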

Regardless of technical developments, iGEM organizers should also reflect on the open-source nature of biological parts, especially if competition projects are to find use in the biotech sector, an area traditionally based on patents and trade secrets. Ownership and sharing in synthetic biology oscillate between widespread gene patenting and open source, in what has been described as a 'diverse ecology'10. iGEM is inspired by engineering and thus by open-source software and distributed innovation. Because the competition focuses on developing particular synthetic biology applications as well as fundamental tools in a scenario of thousands of building parts, open source might be the most logical choice. As Drew Endy stressed in an interview in 2007, “My hope is that by giving things away I will get more back in the long run.” One successful example of this philosophy is the BioBrick Public Agreement (BPA) from the BioBricks Foundation, a free-to-use “legal tool that allows individuals, companies, and institutions to make their standardized biological parts free for others to use”.

However, a clarification of the registry's legal status is desirable, both to ensure transparency and to allow transfer to companies in situations where a project shows applications of particular industrial promise4.

Last, we feel that now is an opportune moment for organizers to reevaluate the judging criteria used in iGEM. Judging is more than the final stage of the work; judges' preferences are likely to strongly influence team instructors and advisors and shape their choices for future projects. Given the large diversity of research topics, experimental models and technological choices, award-winning projects tend to be imitated. A clear example of this is the multiple projects aiming to develop a biodecontamination kit, which are recurrently found among finalists (see Peking 2010, TU Munich 2013 and Dundee 2013). However, although almost all projects have a highly applied purpose, judges are typically pleased by aspects such as the 'originality' of the work (it has not been seen before in iGEM) or its 'roundness' (it tells a story, from a very simple idea to a prototype). One example of this is the Groningen 2012 project on food spoilage control, which used an original bioprospection strategy to identify and select strong promoter sequences. Beyond the impact this strategy had on the judges, the immediate applicability to industry (or the 'responsible research and innovation' (RRI) issues, which emphasize the utility and benefit for society and the environment) should be a key factor in a successful project. If it were, final rankings would certainly change and successful teams would send a clear message on the trends to follow. Judging is always, but particularly in iGEM, a bidirectional process: it ranks proposals, but it also shows the way for forthcoming ones.

Given the wide range of complexity and immediate industrial applicability among iGEM projects, we suggest that the degree of sophistication (for example, the number of biological parts and devices used, the difficulty of working with the host organism and the complexity of the regulatory output) should be formally considered as a ranking criterion by judges. This would help to further increase the competitiveness of the projects.

If the competition is to place more emphasis on translating projects into real industrial applications, then more thought needs to be put into judging criteria that reinforce this aspect. At present, prizes perhaps encourage spectacular and audacious basic research that is often not built upon; each year many teams set up brand-new projects unrelated to past efforts, even award-winning ones. A greater proportion of industrial members on the judging committee would have an immediate effect by redirecting the competition from the 'game phase' (preliminary exploration) to the 'real-world phase'. More judges from government with expertise in regulatory, health, agricultural or defense issues may also expand the diversity of views and decrease academic biases. An increased presence of Asian judges in the world jamboree would also be highly desirable. Another suggestion for improving the quality of judging is standardization of the number of judges per team. Although similar numbers are assigned to each track, judges can cast votes for unassigned teams. As a result, some teams often have many more votes (either positive or negative) than others. Judging has improved considerably over the past few iGEM competitions. The online questionnaire for judges introduced in 2012 incorporates some suggestions that arose during the 2012 regional jamborees, particularly in Europe. It is arguable whether the machine-based ranking of teams should be corrected for data such as team budget or the number of students, advisors or instructors. Given the educational nature of the competition, we suggest that it should.

A greater involvement of ELSI specialists and, particularly, a focus on reflexivity and RRI would also help to shape competition trends by encouraging teams to define their projects with societal and environmental benefits as major goals, along with one of the central aspects of RRI: transparency. Transparency has always been a guiding principle in iGEM, with an open-source–like Internet-based community that shares data, protocols and DNA samples. The economic resources used in iGEM should not be excluded from such information in future. Detailed data on public and private funding as well as their precise assignment throughout the project should be a requisite for each iGEM team. As stated above, we believe that fair judgment is not possible without taking into account the funding-to-results ratio. Determining this ratio is central to assessing productivity of a particular project and of the competition as a whole. Therefore, for the sake of transparency, we propose that participating teams be asked to make their budget public on their wikis.

In summary, we have proposed a range of suggestions that could improve the quality of standards, increase transparency of funding, foster industrial orientation and redefine and enhance judging of the competition. The experience of a decade of iGEM indicates that such redefinition is imperative for this outstanding competition to meet the great expectations of synthetic biology going forward.