
Being a judge for the first "Believe in Ohio" competition

I recently participated as a judge in the first Believe in Ohio regional competition, where students present plans for new products and technology, in the hopes of taking home some of the nearly $2 million available in cash awards and college scholarships.

Believe in Ohio is a free, new program from the Ohio Academy of Science that helps high school students prepare for the future. The program was developed in collaboration with Entrepreneurial Engagement Ohio, with the support of the Ohio Board of Regents and the Ohio General Assembly. Believe in Ohio is the only Ohio student STEM Education Program to integrate entrepreneurship and innovation as pathways to create future jobs.

The competition is actually divided into three parts – local, regional, and state – each with its own level of judging. The regional competition is wrapping up, with winners advancing to the state competition in Columbus later this month. I’d like to help with that one too, but there’s no time this year. Reading about plans on paper is exciting and all (erm…), but I’d have liked to see some presentations and maybe even a few working prototypes.

I had six plans to judge, all clever ideas, if sometimes lacking in detail. What kinds of plans were they? Glad you asked. They were mostly mobile apps, or at least a mobile app was involved. I assume most students primarily use mobile devices – phones and tablets – so to them, innovation means a cool mobile app. Mobile development is up-and-coming, but when it’s the center of your idea, you’d better hope for a million downloads at $0.99 apiece (or for some company to swoop in and buy it from you).

Every one of them had some merit. The devil’s in the details though, and the OAS team definitely wanted the details (read the BIO roadmap to see what they expect). Several had decent detail, certainly appropriate given the participants’ ages, but the others repeated (in a half-dozen ways) the “elevator pitch” and how useful and marketable their idea would be, without explaining exactly how they planned to implement it.

The greatest challenge for me was being fair but realistic, offering constructive criticism without nitpicking. But what is fair? I had no idea going into this (and still have no idea) what the best ideas look like. What does a real knock-it-out-of-the-park, awesome idea look like? That’s the baseline for the maximum 40 points each plan could earn, and against which everything else should be compared.

Well, my bit is done now; my reviews are submitted. Here are some random thoughts after the experience.

To other judges:

It’s hard (or it was for me) balancing the fact that students put real time and effort into these (time they could have spent goofing off) against the fact that there’s only so much money, and it should go to those who put in the most effort and came up with the best plans.

  • The roadmap explains everything: the competition and the differences between plan types, the students’ requirements, and the criteria used in judging. Review it.

  • There are two types of plans. Commercial plans emphasize the technical research behind an idea, geared more towards engineers. Business plans focus on marketability, geared towards the tech-savvy business person. They’re similar in some ways, but different in others. You’ll get both kinds, and you should judge them differently.

  • Read all the plans before judging them. I judged the first one right away, but felt a little uncomfortable about it. What if I rated it too highly, and then the others were amazing? Or too low, and the others didn’t compare? So I read all the other plans and got a better idea of how to rate each one.

  • Trust the description of the score, not just the points. I struggled with the scoring. How is 12 points out of 40 *good*?! But my thought process is a product of the review systems I’m most familiar with, such as Amazon’s. A 12/40 is less than 2/5 stars on Amazon, and I’d stay away from that product. 3/5 is below average. 4/5 is okay. 5/5 means everything is as expected, no surprises. That’s a flaw in online rating systems, IMO.

If a 40/40 represents an awesome, I-have-to-have-one-of-those-right-now idea, then their rating system makes sense. Specifically, 36/40 is superior, 24 is still excellent, and even 12 is considered good. If everyone adheres to the guidelines and is willing to rethink what a rating system means, this should give the BIO team a pretty accurate readout. But if everyone treats 30/40 as “good”, it’s going to be a much closer race and harder to determine clear winners.
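To make those bands concrete, here’s a minimal sketch (in Python, purely for illustration; code plays no part in the judging) of how a 40-point score might map to the descriptors above. The cutoffs are an assumption based on the three scores mentioned (36, 24, 12); the official rubric may draw the lines elsewhere.

```python
def describe_score(points: int) -> str:
    """Map a 0-40 judging score to a descriptor.

    Cutoffs are assumed from the examples above (36 = superior,
    24 = excellent, 12 = good); the official rubric may differ.
    """
    if not 0 <= points <= 40:
        raise ValueError("score must be between 0 and 40")
    if points >= 36:
        return "superior"
    if points >= 24:
        return "excellent"
    if points >= 12:
        return "good"
    return "needs improvement"  # assumed label for the lowest band

# 12/40 sounds low (it's only 1.5 out of 5 in Amazon-star terms),
# yet on this scale it still reads as "good".
print(describe_score(12))  # good
print(describe_score(30))  # excellent, not merely "good"
print(describe_score(38))  # superior
```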

To the organizers:

Kudos to the BIO team for all the planning they’ve done. Reading through the site and the roadmap, it’s obvious how much hard work went into making this happen.

Could we have some sample plans and ratings next year, with all personally identifiable information redacted, of course? This is the first year for the competition, so those weren’t available. It’d make it easier to know how others judged plans that were deemed superior, moderate, etc.

Can we see what else was submitted, besides the few plans we personally reviewed? Perhaps, after the state competition is over, you could upload all the plans (possibly even how they were rated)? I’d love to see what everyone came up with.

Can you remove the “overall rating” dropdown? If I gave a score of 12 points, I’m not going to select “superior”. Likewise, 30 is not merely “good”. It’s just duplicate information.

[Screenshot: the “overall rating” dropdown on the Believe in Ohio judge scorecard]

To the students:

I had fun seeing what students from around the state could come up with. I have kids of my own who might want to compete one day. It’s incredible that so many people donated time and money so generously, and had the foresight to see that this really is an investment in all of our futures.

  • If you’re writing about a piece of software, don’t talk about “perfecting the app” or “until the app is perfected”. That never happens. The only perfect piece of software is the software that’s never written. Seriously. A piece of software can meet requirements, be thoroughly tested, be reasonably bug-free, and quite reliable. But it’s never “perfect”. As long as people are paying for it, *someone* will have to be working on it.

  • Don’t assume your app can be free to consumers, and that businesses will love it so much that they’ll pay tens of thousands of dollars for it, do all the work of importing their data, and buy equipment for their stores to support it. More than one plan assumed that others couldn’t possibly *not* love their app. Mobile apps can be awesome, but it’s a saturated market and tough to get noticed in.

Kudos to all the students who spent their free time writing these plans. I’m truly impressed, and don’t think I would’ve taken the time to do it myself… at least, not without a lot of cajoling. I hope any criticism you get is constructive, and that you’re able to incorporate it into your plans to make them even better.

Best of luck to everyone!


Grant Winney