Global Young Academy 5th International Conference for Young Scientists 2015
DIY/Maker science: Is it time for regulation?

Interaction group leader: Jeremy T. Kerr (jkerr@uottawa.ca), University Research Chair in Macroecology and Conservation
The general view in our group is that new regulations around DIY science are premature at this time. We offer some caveats and cautions around this provisional conclusion as well as a suggestion that a broadly-based discussion that encourages thoughtful recognition of the boundaries of ethical and responsible DIY science is worth initiating immediately.
First, we recognize a short list of areas in which regulation of, or caution around, DIY science could be considered.
1. Product safety: items produced using DIY science may not be tested sufficiently to ensure low-risk operation.
2. Personal safety: individuals engaging in DIY science may be unaware of or unable to take advantage of strong safety procedures required when performing some kinds of work.
3. Environment: some materials, by-products and products of DIY science may pose environmental hazards.
4. Misuse: a risk that is probably very small in likelihood but certainly very serious in consequence; some DIY science may have deliberately harmful outcomes.
5. Reliability/Reputational risk: scientific credibility is a precious commodity, and declarations of discovery that do not stand up to strong testing may undermine public confidence in science broadly or cause direct harm.
6. Social and ethical implications of risk: even where the ethical boundaries of DIY science are explicitly recognized, accountability may be insufficient to prevent unethical work, an issue whose implications differ across society with age and social setting.
Each area of concern could be the subject of expansive discussion. Access to DIY kits, for example, is broad, but there is no mandatory observance of safety rules or product reliability standards. The question reduces to one of evidence: do we have specific information that there is a problem at this time? There does not yet appear to be systematic evidence of problems, but a point emerging from our discussion is that these are early days. As the community grows, the probability of problems likely rises as well.
Deliberate misuse of DIY science products and techniques is not a theoretical risk; the risks here are real. However, the magnitude of these risks within the DIY community is probably no greater than in unrelated segments of society. Yet strong regulations apply to many aspects of institution-based research (e.g. around animal care) that are practically free of regulation in the DIY community. This mismatch between institutional and DIY communities may create problems if institutional researchers use DIY venues to skirt the rules. Perhaps doing so should be made very difficult by ensuring that some kinds of research (e.g. animal-based research, again) are universally regulated within countries.
To regulate or not to regulate?
Because risk analysis lays the foundation for suggesting regulation, we distinguish between the likelihood of a problem and its potential severity. A low-likelihood but catastrophically severe risk nevertheless suggests that efforts to mitigate it are necessary, and such efforts indeed already exist (e.g. against deliberate production of a pathogenic organism). Conversely, a low-probability, low-severity risk might not call for regulation (e.g. production of an unreliable but low-voltage device). It would be easy to misstate the risks of deliberate misuse; specific information about its actual rates is needed. Decisions around the creation of new regulations should be based on evidence: excessive regulation might simply be ignored, or might unintentionally stifle individual innovation.
Some DIY claims around efficacy or discovery may prove to be false. Many of these claims will be harmless and addressed successfully through existing, open science processes within the DIY community. In this respect, the fundamentals of science do not differ materially from established institutional standards, where conjecture and refutation lead routinely to the disproof of hypotheses or claims published in research journals. It is useful to remember that the distinctions between “institutional science” and the DIY community reflect the trappings of these traditions, not their fundamentals.
“Publication by press release” is a dangerous activity. Common to both the DIY and institutional science communities is the challenge of false claims that receive wide publicity. Claims of imminent, radical new cures for disease, for example, if made prematurely and proven false, can severely degrade public confidence in science. The confidence that science enjoys among the general public is strong but probably also very fragile. False claims with important implications, such as the reported discovery of cold fusion (Fleischmann and Pons in the late 1980s), are quickly and publicly debunked, and science goes on. Such episodes nevertheless create lasting damage by undermining public confidence in scientific results and indeed in scientists themselves. Richard Feynman’s advice that scientists must be extremely diligent in communicating their results applies no less to the DIY community than to institutional scientists.
More troubling are claims that lead to harm. A recent example is the fraudulent work published in the Lancet (since retracted) that caused massive controversy over vaccine safety. The anti-vaccine movement that has arisen subsequently has killed or permanently harmed many people, and it causes extensive, needless suffering even among those who eventually recover fully from illnesses that vaccines would have prevented. Such events will probably remain rare, but they do lasting damage. Ethics matter. It may be helpful for members of the scientific community who are well versed in the ethics of research and communication to engage the DIY community in dialog.
Just one more thing. Or two.
We strongly encourage an active and broadly-based discussion that includes the DIY and institutional research communities. This discussion should be a two-way street, and it is possible (or likely) that established research would benefit greatly from DIY-flavoured experiences and dynamism. Conversely, the DIY community includes many young people who simply have not yet had the opportunity to scan the horizons widely and may not appreciate the possibility of unintended consequences in their work. Issues around safety of DIY practices are real and become more significant when they involve children, for whom society bears a general responsibility, or could lead to direct harm of DIY practitioners or users of DIY products.
Such a dialog, and perhaps debate, might lead to the recognition of a safe DIY/maker space, within which the likelihood and severity of harm are both low. Beyond this safe space, there may be a zone where harm becomes likely or, even if unlikely, is nevertheless more serious. This is a grey area, at best, where DIY/maker codes of conduct might need to include some regulations. Beyond this lies a “red zone”, where the likelihood of harm is high or the harm would be very large. There be dragons. DIY/maker science should never venture into such areas, and regulations with teeth are probably needed (though in some cases these may already exist in the form of criminal law).
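To make this zone model concrete, a minimal sketch follows, assuming a simple 0–1 scoring of likelihood and severity. The numeric thresholds, scores, and function name are hypothetical illustrations for discussion, not anything proposed by the group.

```python
# Illustrative only: a toy classifier for the proposed DIY risk zones.
# Scores and thresholds are hypothetical, not agreed standards.

def risk_zone(likelihood: float, severity: float) -> str:
    """Map a risk, scored 0-1 on likelihood and severity, to a zone.

    - "safe": likelihood and severity of harm are both low
    - "grey": one dimension is elevated; codes of conduct may apply
    - "red":  harm is likely or would be very large; hard rules apply
    """
    HIGH = 0.7  # hypothetical cut-off for "likely" / "very serious"
    LOW = 0.3   # hypothetical cut-off for "unlikely" / "minor"

    if likelihood >= HIGH or severity >= HIGH:
        return "red"
    if likelihood <= LOW and severity <= LOW:
        return "safe"
    return "grey"

# An unreliable but low-voltage device: unlikely, minor harm.
print(risk_zone(likelihood=0.2, severity=0.1))   # -> "safe"
# Moderately likely but still minor harm: codes of conduct territory.
print(risk_zone(likelihood=0.5, severity=0.2))   # -> "grey"
# Deliberate production of a pathogen: unlikely but catastrophic.
print(risk_zone(likelihood=0.05, severity=0.95)) # -> "red"
```

Note that, as the text argues, a catastrophically severe risk lands in the red zone even when its likelihood is very low.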
Encouraging the development of DIY codes of conduct might help promote best practices that lower the chances of someone running into the grey (or red) zones. At the same time, codes of conduct are not the same thing as accountability. Such a code of conduct might help prevent overstated claims of discovery that undermine public confidence in science. DIYers should be encouraged to observe such a code voluntarily, but they should also be aware that venturing into the red zone (e.g. creating products that might malfunction and harm children) could lead to severe, and justifiable, sanctions.
Original notes, unordered.
The possibility of false claims is large, and such claims will not always be easily weeded out by the open science model. False claims are not new and can perhaps be addressed through existing regulatory mechanisms. But public confidence in science may be fragile, and grandiose or inflated claims can be a challenge (e.g. anti-vaccine claims, Fleischmann and Pons).
Product safety frameworks are intended to handle false claims of efficacy also.
Poland has a very restrictive series of laws for DIY molecular biology.
There is a general reflex, when something bad happens, to legislate against it. Yet there may already be regulation or existing legislation that addresses the problem. A knee-jerk legislative response to a particular challenge may well lead to expansive, blunt-instrument consequences that seriously limit the DIY movement and its many benefits.
DIY/Maker science is clearly accelerating, but many members of this community may be working alone. What prevents people from deliberate misuse is not the accessibility of the science, but concerns about illegality.
We do not yet know whether regulations need to be formalized, but the mechanisms, as well as the array of needs and potential risks, must be addressed in a broad conversation. Such a conversation would enable the broad community to identify a safe zone, ethically and in terms of risk, for the maker/DIY community. By encouraging and fostering a broad conversation, we would hope to identify that safe space, then also a grey zone where risks and ethical challenges arise, and perhaps a red zone where DIY science should not venture without controls and regulations similar to those that govern institutions.
Regulations often apply to institutions but nowhere else. Similarly, ethical rules exist around some kinds of institutional research, such as work involving animals, that may not or do not consistently apply outside institutions.
Institutions are accountable for mistakes that create any form of risk. Individuals may not be.
In the community, are there forums or opportunities where people simply pause and ask someone who can speak authoritatively to the question: Should we do this? If such spaces do not exist, how could we create these?
Also, when DIY science is conducted by very young or inexperienced people, it may be very difficult for them to understand the implications of their efforts.
An example from Thailand: an inventor and a doctor co-developed a type of inhaler that parents could construct themselves. The construction process for this device is sloppy and uses materials that may have no quality control at all. There may be a gap in perceived quality, but there may also be a true quality gap between DIY and traditionally sourced medical devices.
This problem is complex and requires broad conversations. Are scientists who have experience with these risks not also ethically obligated to inform DIY scientists? Are there opportunities, or perhaps obligations, to proactively inform this emerging community? This is not intended as condescension or one-way communication in which established, institutional researchers tell the DIY community how it is; communication can go both ways.
How can the GYA facilitate these communications?