2021 Regional Review

Introduction

Another CPTC season is in full swing. This year's CPTC theme is Le Bonbon Croissant, a French candy factory in the mold of several famous chocolate factories. We also have a brand new CPTC website at cp.tc, which includes our new hall of fame page (shoutout to Colum, Brock, and Lucas). This year's regional events were bigger than ever! We had seven different regional events, each with its own winner, plus eight at-large (wildcard) winners, for a total of fifteen finalists. We ran these events concurrently over two separate weekends, hosting multiple regional events each weekend. These were massive operational weekends for us, but the teams pulled the events off with only minor issues (like having to revert our tooling and deploy to an unplanned hosting provider). Here are all fifteen finalists from this season:

Regional Winners

  • US New England: Rochester Institute of Technology

  • US Northeast: Carnegie Mellon University

  • US Southeast: University of Florida

  • US Central: Dakota State University

  • US West: California State Polytechnic University, Pomona

  • Europe: Masaryk University

  • Middle East: Princess Sumaya University for Technology

At-Large Winners

  • DePaul University

  • Drexel University

  • California State University, Fullerton

  • Stanford University

  • The University of Tulsa

  • Tennessee Technological University

  • University of New Haven

  • University of West Florida

Scoring Changes

This year I wanted to take a critical look at how we were scoring our game, so I switched my role from OSINT/World Director to Scoring Director. As a result, this year brought a lot of changes to both our scoring rubrics and how we qualified teams for the final event. Our ultimate goal is to further balance the game and give students more opportunity to differentiate their performance. For example, we wanted to put more emphasis on student interactions as opposed to final presentations, which means both inject and compliance scores carry increased weight this year. Concretely, some of our scoring categories have shifted: technical findings and reporting each moved from 40% to 35% of the final grade. We used that extra 10% (5% from each category) to bump injects, compliance, and interactions up from 20% to 30% of the final grade. Practically, that means we put a bigger emphasis on interactions throughout the event this year. We also made lots of changes to our scoring server to support these game changes, resulting in more insight into scores and a massive quality-of-life improvement for graders (shout-out to Lucas for the new scoring app!).
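
To make the weight shift concrete, here's a minimal sketch of the arithmetic. The category names and the 0-100 score scale are my assumptions for illustration, not our actual scoring implementation:

```python
# Hypothetical sketch of the 2021 weight change; category names and the
# 0-100 score scale are assumptions for illustration only.
OLD_WEIGHTS = {"technical_findings": 0.40, "reporting": 0.40, "interactions": 0.20}
NEW_WEIGHTS = {"technical_findings": 0.35, "reporting": 0.35, "interactions": 0.30}

def final_grade(category_scores, weights):
    """Weighted sum of per-category scores (each assumed to be 0-100)."""
    return sum(category_scores[cat] * w for cat, w in weights.items())

# A team strong on injects/compliance/interactions gains under the new weights.
scores = {"technical_findings": 70.0, "reporting": 75.0, "interactions": 90.0}
print(final_grade(scores, OLD_WEIGHTS))  # 76.0
print(final_grade(scores, NEW_WEIGHTS))  # 77.75
```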

Report Rubric Changes

We also redid the rubric for our reports and put out detailed guidance to our scoring team on the new rubric. The rubric was designed to let teams stand out in different areas, giving them more ways to differentiate themselves in their reports. The overall goal was to help separate teams more in the final scores, adding range between the reports based on their different features. Before implementing this we also ran many simulations of scores under the new rubric and saw a wide range of outcomes we really liked. Ultimately this worked well, and we were able to reward teams for unique features in their reports that many other teams did not consider. If you missed it, I did a detailed post earlier this year on writing better pentest reports that was heavily modeled on this new rubric. As we get closer to the finals, keep in mind I did a similar post on the winning presentations from last year.
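
For the curious, that kind of simulation can be approximated with something like the sketch below. The item count, point values, and team skill model here are all made-up assumptions; the point is just to sanity-check that a multi-feature rubric spreads scores out:

```python
import random

# Hypothetical rubric simulation; item count, point values, and the team
# skill model are all assumptions for illustration.
RUBRIC_ITEMS = 12     # assumed number of scored report features
POINTS_PER_ITEM = 5   # assumed max points per feature

def simulate_team():
    # Model each team as hitting each rubric feature with a skill-based probability.
    skill = random.uniform(0.3, 0.9)
    return sum(
        random.randint(1, POINTS_PER_ITEM)
        for _ in range(RUBRIC_ITEMS)
        if random.random() < skill
    )

report_scores = sorted(simulate_team() for _ in range(60))
print(f"min={report_scores[0]}  median={report_scores[30]}  max={report_scores[-1]}")
```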

Wildcard Selections

Finally, our wildcard (at-large) scoring was a bit different this year, but ultimately I really like this method of choosing additional finalists. We had regional judges for every event, plus a superset of wildcard judges with experience judging across all events. We took the top team from each region for the finals, to guarantee representation from every region at our final event. We then took all of the remaining teams, sorted them by their total scores, and selected the next eight top-performing teams. This way we may end up with more representation from a certain area, but we chose it so we could get the best-performing teams at the finals. Our new scoring portal also helped greatly with checking these final scores and tracking our scoring team's outstanding tasks. It allowed multiple people to double-check the totals and confirm our team selections for finals.
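
As a sketch, the selection logic boils down to something like this. The team record layout ("name", "region", "total_score") is an assumption for illustration, not our actual scoring portal code:

```python
from operator import itemgetter

# Hypothetical finalist selection: one winner per region, then the next
# top-scoring teams overall fill the at-large (wildcard) slots.
def pick_finalists(teams, wildcard_slots=8):
    regional_winners = {}
    for team in sorted(teams, key=itemgetter("total_score"), reverse=True):
        # The highest-scoring team seen per region takes that region's slot.
        regional_winners.setdefault(team["region"], team)
    winners = list(regional_winners.values())
    # Everyone else competes for the at-large slots on raw total score.
    remaining = [t for t in teams if t not in winners]
    wildcards = sorted(remaining, key=itemgetter("total_score"), reverse=True)[:wildcard_slots]
    return winners, wildcards
```

With seven regions and eight wildcard slots, this yields the fifteen finalists listed above.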

In Closing

Those are some quick regional and scoring updates. I'm really looking forward to our finals event, where we will attempt a hybrid in-person/online event this year. It's already right around the corner, scheduled for 1/6/22 - 1/9/22! What an exciting way to kick off the new year. This is our first CPTC blog post and a cross-post from https://lockboxx.blogspot.com.
