UChicago Trading Competition 2025 - Case 1¶
Written by Ephram Cukier¶
This past weekend, I competed in the 13th annual UChicago Trading Competition, a very rigorous competition with 40 teams composed of students mainly from UChicago, Stanford, MIT, and the Ivies. My team (CU Boulder) earned second place overall, with third place in Case 1 and fourth place in Case 2. This is an explanation of how I arrived at the Case 1 solution. You can read about my Case 2 solution here.
Understanding the Problem¶
In this case, there was a live-trading simulation with five assets. There were three stocks for companies: a low-cap stock MKJ, a mid-cap stock DLR, and a high-cap stock APT. Then there was an ETF AKAV tracking the price of these three assets, and an ETF AKIM which tracked the daily inverse movements of AKAV. The prices of the three companies were influenced by different types of news events.
APT released earnings twice per trading day and, per the case packet, had a constant P/E ratio of 10. DLR was a construction company trying to win a big government contract. To get the contract, they needed a petition to pass with 100,000 signatures. Five times per day, there would be a news event containing the number of signatures on the petition, with the number of new signatories drawn from a lognormal distribution. If the petition passed, the asset would have a value of 10,000. If the petition failed, the asset would go to 0. Originally, the parameters of the lognormal distribution were not shared. Although it would have been quite easy to leave a bot watching the exchange for a few hours to estimate them, the competition organizers decided to release the parameters before we needed to do this.
MKJ released text-based news events throughout the day, with no given indication of how the market would respond.
The scoring system was also designed to heavily punish volatility. Each round, teams were ranked by PnL, then given points equal to $(n-1)^2$, where $n$ is their position in the ranking (fewer points wins). First and third place would receive 0 and 4 points respectively, but 21st place would receive 400. Essentially, you wanted to perform consistently, even if that meant your average PnL would be slightly lower. The rounds were also weighted by an unknown factor so that later rounds counted for more (had this weighting not happened, or had it been smaller, we would have received second in this case and first place overall).
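For concreteness, here is a minimal sketch of how a round's scoring could be computed from a PnL ranking. The per-round weights are placeholders, since the organizers never shared them, and the function names are ours for this writeup.

# Sketch of the scoring rule: rank teams by PnL each round, award (rank - 1)^2
# points, then sum over rounds with (hypothetical) weights. Fewer points wins.
def round_points(pnls: dict) -> dict:
    ranked = sorted(pnls, key=pnls.get, reverse=True)  # highest PnL gets rank 1
    return {team: (rank - 1) ** 2 for rank, team in enumerate(ranked, start=1)}

def total_score(rounds: list, weights: list) -> dict:
    totals = {}
    for pnls, weight in zip(rounds, weights):
        for team, points in round_points(pnls).items():
            totals[team] = totals.get(team, 0.0) + weight * points
    return totals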
Finally, there were strict risk limits that would not let you hold a position of more than 200 units in any asset at a given time.
I wanted to better understand how these markets responded to the news, so I needed some form of live graphing. Luckily, for my robotics team I have been building a live graphing dashboard, Waggle! Waggle only had C++ and Rust client libraries at the time, but creating a Python client was trivial.
import json
import time
import requests
from typing import Dict, List


class WaggleClient:
    def __init__(self, server_url="http://localhost:3000", rate_limit=0):
        self.server_url = server_url
        self.batch_endpoint = f"{server_url}/batch"
        self.data_buffer = {
            "sent_timestamp": 0,
            "images": {},
            "graph_data": {},
            "string_data": {},
        }
        self.rate_limit = rate_limit
        self.last_sent = 0

    def add_graph_point(self, graph_name: str, x: float, y: float,
                        clear_data: bool = False) -> None:
        if graph_name not in self.data_buffer["graph_data"]:
            self.data_buffer["graph_data"][graph_name] = []
        point = {"x": x, "y": y}
        if clear_data:
            point["settings"] = {"clear_data": True}
        self.data_buffer["graph_data"][graph_name].append(point)

    def add_graph_points(self, graph_name: str, points: List[Dict[str, float]],
                         clear_data: bool = False) -> None:
        """Add multiple points to a graph at once."""
        if clear_data and points:
            if "settings" not in points[0]:
                points[0]["settings"] = {}
            points[0]["settings"]["clear_data"] = True
        if graph_name not in self.data_buffer["graph_data"]:
            self.data_buffer["graph_data"][graph_name] = []
        self.data_buffer["graph_data"][graph_name].extend(points)

    def set_string_data(self, key: str, value) -> None:
        value = str(value)
        self.data_buffer["string_data"][key] = {"value": value}

    def update_robot_position(self, x: float, y: float, heading: float) -> None:
        self.data_buffer["robot_position"] = {
            "x": x,
            "y": y,
            "heading": heading
        }

    def add_image(self, name: str, image_data: str, scale: float = 1.0, flip: bool = False) -> None:
        self.data_buffer["images"][name] = {
            "image_data": image_data,
            "scale": scale,
            "flip": flip
        }

    def clear_all(self) -> None:
        self.data_buffer = {
            "sent_timestamp": 0,
            "images": {},
            "graph_data": {},
            "string_data": {},
            "robot_position": None
        }

    def send_data(self, timeout=0.15) -> bool:
        # Respect the client-side rate limit so we never spam the dashboard server
        if time.time() - self.last_sent < self.rate_limit:
            return False
        self.last_sent = time.time()
        self.data_buffer["sent_timestamp"] = time.time()
        data = json.dumps(self.data_buffer)
        try:
            response = requests.post(url=self.batch_endpoint, data=data, timeout=timeout)
            success = response.status_code == 200
            if not success:
                return False
            self.clear_all()
            return success
        except Exception:
            return False


if __name__ == "__main__":
    client = WaggleClient()
    client.add_graph_point("Example Graph", 1, 5)
    client.add_graph_point("Example Graph", 2, 7)
    client.add_graph_point("Example Graph", 3, 9)
    client.set_string_data("mode", "AUTONOMOUS")
    client.set_string_data("status", "All systems nominal")
    client.send_data()
By the day of the competition, we were throwing every piece of information we had onto the dashboard.

But what we saw at the beginning was fascinating. The market responded to news events about 0.5-1.5 seconds after we received them. This meant we could act like high-frequency traders and simply move faster than everyone else. Eventually, we narrowed our strategy down to four components.
- Trade on news events faster than the other bots
- Use the ETFs to triple our position on an asset to get around the risk limits
- When the AKAV ETF is overvalued or undervalued, place swap orders to trade the arbitrage
- Be incredibly ready to deal with the issues that arise at the competition
For this strategy, we wanted to enter a huge position and exit a matter of seconds later, after the market had updated. Our goal was to maintain a position of 0 when not actively responding to a news event.
Staying Adaptable¶
To be ready for everything the competition could throw at us, we did a couple of things. Although we wanted to take on the largest position we could, sometimes market conditions would make that a losing strategy. For our strategy to work, we needed to take on a position very quickly, which meant relying on market orders instead of limit orders. Market orders fill faster than limit orders, but at whatever price the market offers, and if there is not much volume, we risked wiping out the order book and trading at very disadvantageous prices. To counteract this, we had the ability to shut off different aspects of our trading strategy, as well as change the size of the positions we took on.
DLR_NEWS_ENABLED = True
APT_NEWS_ENABLED = True
MKJ_NEWS_ENABLED = False
AKAV_ARB_BUY_ENABLED = True
AKAV_ARB_SHORT_ENABLED = True
AKIM_NEWS_ENABLED = True
DLR_BUY_QTY = 40
DLR_BUY_MULT = 5
APT_BUY_QTY = 40
APT_BUY_MULT = 5
AKAV_BUY_QTY = 40
AKAV_BUY_MULT = 5
AKIM_BUY_QTY = 40
AKIM_BUY_MULT = 5
# You can only trade up to 40 units per order, despite being able to hold a position of 200. So, we just traded 40 units 5 times.
We also had code to close our position on start-up in case of a bot crash or something else going wrong. When closing our position, we sold only a small amount of each stock on a delay to avoid triggering any market shocks or ruining the order book. Originally, we closed most of our positions with limit orders. However, the competition library would not fetch old orders from the server, so on restart we could not see what orders we had outstanding, leading to 'phantom orders.' Although this could have been solved by saving the orders locally to a file, under the intense time pressure of the live trading I instead opted to make all of our positions close using the slow-close algorithm.
    async def slow_close(self, assets, max_qty=30, cancel_existing=True):
        for asset in assets:
            if asset not in self.trade_locks:
                self.trade_locks[asset] = asyncio.Lock()  # Used locks because we had many async tasks running at once. For most teams this was not necessary
        new_assets = []
        for asset in assets:
            if not self.trade_locks[asset].locked():
                print(f'Trying lock for {asset}')
                await self.trade_locks[asset].acquire()
                new_assets.append(asset)
        hit = True

        async def cancel(symbol):
            ids = []
            for id, order in self.open_orders.items():
                if order[0].symbol == symbol:
                    ids.append(id)
            for id in ids:
                await self.cancel_order(id)
            return len(ids) > 0

        while hit:
            hit = False
            for asset in new_assets:
                if await cancel(asset):
                    pass
                    # await asyncio.sleep(0.45)
                qty = self.positions.get(asset, 0)
                if qty > 0:
                    hit = True
                    print(f'Placing {asset} slow sell {min(max_qty, qty)}')
                    await self.place_order(asset, min(max_qty, qty), xchange_client.Side.SELL)
                elif qty < 0:
                    hit = True
                    print(f'Placing {asset} slow buy {min(max_qty, -qty)}')
                    await self.place_order(asset, min(max_qty, -qty), xchange_client.Side.BUY)
                await asyncio.sleep(0.25)
        for asset in new_assets:
            print(f'Releasing lock for {asset}')
            self.trade_locks[asset].release()
    def get_lowest_ask(self, symbol: str):
        lowest_ask = DEFAULT_LOWEST_ASK
        for price, qty in reversed(self.order_books[symbol].asks.items()):
            if qty != 0:
                if price < lowest_ask:
                    lowest_ask = price
        return lowest_ask

    def get_highest_bid(self, symbol: str):
        highest_bid = 0
        for price, qty in self.order_books[symbol].bids.items():
            if qty != 0:
                if price > highest_bid:
                    highest_bid = price
        return highest_bid
There were other random things that popped up during the competition, such as the competition server not sending positions of 0, which left the library's positions dictionary unfilled (we iterated over this dictionary as a list of all assets, so this was a problem for us). At startup we therefore initialized the dictionary with all the asset names. I also had to edit the provided competition library to fix a bug in their code that affected order IDs when a bot restarted. For each challenge that arose at the competition, we kept focused and made the necessary changes.
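For illustration, here is a minimal sketch of that startup fix. The SYMBOLS list and init_positions helper are our own names for this writeup, not part of the provided library.

# Sketch: pre-fill the positions dict so every asset appears even if the
# server never sends a 0 position for it.
SYMBOLS = ["APT", "DLR", "MKJ", "AKAV", "AKIM"]

def init_positions(self):
    for symbol in SYMBOLS:
        self.positions.setdefault(symbol, 0)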
ETF Arbitrage¶
ETF arbitrage is a relatively simple concept. The ETF is a basket of three stocks. So if the ETF is trading at a lower price than those three stocks are worth, we should buy the ETF and swap it for the three stocks. If the ETF is trading at a higher price, we should buy the three stocks and swap them for the ETF.
    async def check_etf_arb(self):
        await asyncio.sleep(3)
        while True:
            if time.time() - self.last_news < 3:
                # Too soon after news
                await asyncio.sleep(1)
                continue
            qty = AKAV_BUY_QTY
            mkj_ask = self.get_lowest_ask("MKJ")
            apt_ask = self.get_lowest_ask("APT")
            dlr_ask = self.get_lowest_ask("DLR")
            mkj_bid = self.get_highest_bid("MKJ")
            apt_bid = self.get_highest_bid("APT")
            dlr_bid = self.get_highest_bid("DLR")
            sum_of_fairs_ask = mkj_ask + apt_ask + dlr_ask
            sum_of_fairs_bid = mkj_bid + apt_bid + dlr_bid
            akav_ask = self.get_lowest_ask("AKAV")
            akav_bid = self.get_highest_bid("AKAV")
            # AKAV is undervalued -> Buy
            if akav_ask < sum_of_fairs_bid - PCT * akav_ask and AKAV_ARB_BUY_ENABLED:
                start_pnl = pnl
                print("AKAV ARB BUYING AKAV")
                for _ in range(AKAV_BUY_MULT):
                    await self.place_order("AKAV", qty, xchange_client.Side.BUY)
                await asyncio.sleep(0.35)  # Delay to allow the exchange to update with the order completion (await place_order awaits the placing of the order, not its completion)
                for _ in range(AKAV_BUY_MULT):
                    await self.place_swap_order("fromAKAV", qty)
                await asyncio.sleep(0.35)
                await self.slow_close(['AKAV', 'APT', 'DLR', 'MKJ'])
                end_pnl = pnl
                self.report_result("AKAV Arb Buy", end_pnl - start_pnl)
            # AKAV is overvalued -> Short
            if akav_bid > sum_of_fairs_ask + PCT * akav_bid and AKAV_ARB_SHORT_ENABLED:
                print("AKAV ARB SHORTING AKAV")
                start_pnl = pnl
                for _ in range(AKAV_BUY_MULT):
                    await self.place_order("AKAV", qty, xchange_client.Side.SELL)
                await asyncio.sleep(0.35)
                for _ in range(AKAV_BUY_MULT):
                    await self.place_swap_order("toAKAV", qty)
                await asyncio.sleep(0.25)
                await self.slow_close(['AKAV', 'APT', 'DLR', 'MKJ'])
                end_pnl = pnl
                self.report_result("AKAV Arb Sell", end_pnl - start_pnl)
            await asyncio.sleep(0.1)
This block of code was responsible for about half the money we made during the practice rounds, prompting the organizers to ask us what we were doing that was making so much money :D. That said, during the actual competition we had to shut it off because it was behaving erratically, and there were other things to fix.
Retrospectively, it was likely other competitors with faster bots getting in ahead of us. This code has two delays, making our ETF arb slow. We should have been "closing our position" before we necessarily had the ETFs to actually trade with, relying on the swaps going through.
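Here is a rough, untested sketch of what that faster buy leg could have looked like: fire the swaps and the exit orders concurrently instead of sleeping between steps. The fast_arb_buy name is hypothetical (not something we actually ran), and it assumes the swaps always settle.

    # Hypothetical faster arb buy leg: place the AKAV buys, then immediately fire
    # the swaps and start selling the underlying legs we expect to receive,
    # trusting the swaps to settle rather than waiting on fixed delays.
    async def fast_arb_buy(self, qty):
        for _ in range(AKAV_BUY_MULT):
            await self.place_order("AKAV", qty, xchange_client.Side.BUY)
        swap_tasks = [self.place_swap_order("fromAKAV", qty) for _ in range(AKAV_BUY_MULT)]
        sell_tasks = [
            self.place_order(sym, qty, xchange_client.Side.SELL)
            for sym in ("APT", "DLR", "MKJ")
            for _ in range(AKAV_BUY_MULT)
        ]
        await asyncio.gather(*swap_tasks, *sell_tasks)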
News Handling¶
Here is the basic code we had in the news callback:
    async def bot_handle_news(self, news_release: dict):
        self.last_news = time.time()
        timestamp = news_release["timestamp"]  # This is in exchange ticks, not ISO or epoch
        news_type = news_release['kind']
        news_data = news_release["new_data"]
        if news_type == "structured":
            subtype = news_data["structured_subtype"]
            symb = news_data["asset"]
            if subtype == "earnings":
                if APT_NEWS_ENABLED:
                    asyncio.create_task(self.handle_apt_news(news_data))
            else:
                if DLR_NEWS_ENABLED:
                    asyncio.create_task(self.handle_dlr_news(news_release, timestamp, news_data))
        else:
            self.mkj_last_news = time.time()
            if MKJ_NEWS_ENABLED:
                asyncio.create_task(self.handle_mkj_news(news_data))
        if AKIM_NEWS_ENABLED:  # For tripling our position using AKIM and AKAV
            asyncio.create_task(self.akim_trading())
APT¶
APT handling was very simple, as the fair value was essentially given to us in the earnings. Since we were always placing market orders and wanted to be incredibly risk averse, we checked our prices against the worse side of the spread.
    async def handle_apt_news(self, news_data):
        earnings = news_data["value"]
        waggle.set_string_data("APT News", news_data)
        target_price = earnings * 10
        lowest_apt_ask = self.get_lowest_ask("APT")
        highest_apt_bid = self.get_highest_bid("APT")
        buy_thresh = 20
        qty = APT_BUY_QTY
        waggle.set_string_data("APT Time", time.ctime(time.time()))
        if lowest_apt_ask + buy_thresh < target_price:
            for _ in range(APT_BUY_MULT):
                await self.place_order('APT', qty, xchange_client.Side.BUY)
            self.apt_trade = 'BUY'
            waggle.set_string_data("APT Decision", "Buying")
        elif highest_apt_bid - buy_thresh > target_price:
            for _ in range(APT_BUY_MULT):
                await self.place_order('APT', qty, xchange_client.Side.SELL)
            self.apt_trade = 'SHORT'
            waggle.set_string_data("APT Decision", "Shorting")
        else:
            waggle.set_string_data("APT Decision", "No trade made")
        await asyncio.sleep(5)
        await self.slow_close(['APT'])
DLR¶
DLR was a little tougher. Because of the lognormal signature process, we needed to figure out how many news events were left in the round.
    def get_events_remaining(self, timestamp):
        ticks_per_second = 5
        ticks_per_round = 15 * 60 * ticks_per_second
        ticks_remaining = ticks_per_round - timestamp
        ticks_per_day = 90 * ticks_per_second
        seconds_remaining = ticks_remaining / ticks_per_second
        waggle.set_string_data("time_remaining", f"{int(seconds_remaining // 60)}:{int(seconds_remaining % 60)}")
        whole_days_remaining = ticks_remaining // ticks_per_day
        ticks_in_this_day_remaining = ticks_remaining % ticks_per_day
        events_in_this_day_remaining = min((ticks_in_this_day_remaining - 1) // (15 * ticks_per_second), 4)
        news_events = whole_days_remaining * 5 + events_in_this_day_remaining + 1  # +1 for the current event that we're dealing with
        waggle.set_string_data("News Events Remaining", news_events)
        waggle.set_string_data("events_in_this_day_remaining", events_in_this_day_remaining)
        waggle.set_string_data("ticks_remaining", ticks_remaining)
        return news_events
    async def handle_dlr_news(self, news_release, timestamp, news_data):
        new_signatures = news_data["new_signatures"]
        cumulative = news_data["cumulative"]
        news_events_remaining = self.get_events_remaining(timestamp)
        percent_chance, predicted_ending = monte_carlo_logdist(S0=cumulative, num_events=news_events_remaining, target=100_000)
        waggle.add_graph_point("DLR Percent Chance", time.time(), percent_chance)
        waggle.add_graph_point("DLR Predicted Ending", time.time(), predicted_ending)
        target_price = percent_chance * 100
        waggle.add_graph_point("DLR Predicted", time.time(), target_price)
        waggle.add_graph_point("DLR Signatures", time.time(), cumulative)
        waggle.set_string_data("DLR Time", time.ctime(time.time()))
        waggle.set_string_data("DLR News", news_release)
        buy_thresh = 10
        qty = DLR_BUY_QTY
        waggle.set_string_data("DLR Decision", "No trade made")
        lowest_dlr_ask = self.get_lowest_ask("DLR")
        highest_dlr_bid = self.get_highest_bid("DLR")
        if lowest_dlr_ask + buy_thresh < target_price:
            for _ in range(DLR_BUY_MULT):
                await self.place_order('DLR', qty, xchange_client.Side.BUY)
                await self.place_order('DLR', qty, xchange_client.Side.SELL, int(target_price) - buy_thresh // 2)
            self.dlr_trade = 'BUY'
            waggle.set_string_data("DLR Decision", "Buying")
            trade_made = True
        if highest_dlr_bid - buy_thresh > target_price:
            for _ in range(DLR_BUY_MULT):
                await self.place_order('DLR', qty, xchange_client.Side.SELL)
                await self.place_order('DLR', qty, xchange_client.Side.BUY, int(target_price) + buy_thresh // 2)
            self.dlr_trade = 'SHORT'
            waggle.set_string_data("DLR Decision", "Shorting")
            trade_made = True
        await asyncio.sleep(3)
        await self.slow_close(['DLR'])
def monte_carlo_logdist(S0, num_events, target=100_000, num_trials=1000, alpha=1.0630449594499, sigma=0.006):
    results = np.zeros(num_trials)
    for r in range(num_trials):
        S = np.zeros(num_events)
        S[0] = S0
        for i in range(1, num_events):
            mean = np.log(alpha) + np.log(S[i - 1])
            S[i] = np.random.lognormal(mean, sigma)
        results[r] = S[-1]
    ans = np.mean(results > target) * 100
    return ans, np.mean(results)
Retrospectively, we should have pre-computed the Monte Carlo distribution so we could enter positions even faster, but given the size of the simulation this only slowed us down by a few milliseconds. Although that could be a lot in real HFT, in this environment it made no difference (in a real HFT scenario you likely wouldn't be using Python anyway).
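For reference, that pre-computation could have looked something like the sketch below: cache the pass probability on a grid of cumulative-signature values for each possible number of remaining events, then interpolate at news time. The grid bounds, step size, interpolation, and function names are arbitrary choices for this sketch, not from the case.

# Sketch: pre-compute P(petition passes) on a grid of cumulative signatures,
# one curve per possible number of remaining events, then look it up at news time.
import numpy as np

def precompute_pass_probability(max_events=50, grid=np.arange(1_000, 100_001, 500)):
    table = {}
    for events_remaining in range(1, max_events + 1):
        probs = [monte_carlo_logdist(S0=s, num_events=events_remaining)[0] for s in grid]
        table[events_remaining] = (grid, np.array(probs))
    return table

def lookup_pass_probability(table, cumulative, events_remaining):
    grid, probs = table[events_remaining]
    return float(np.interp(cumulative, grid, probs))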
Increasing Our Position with ETFs¶
The logic here was simple. We were only allowed to trade a certain amount of each asset. However, for DLR and APT we were essentially guaranteed the direction each asset would move, so we could buy or short both ETFs to triple the position we were taking on. This let us not only take on greater positions normally, but also still trade large when volume in DLR or APT was low.
    async def etf_increase_position(self):
        qty = AKIM_BUY_QTY
        if time.time() - self.mkj_last_news < 3:
            await asyncio.sleep(0.5)
            return
        await asyncio.sleep(0.15)
        if self.apt_trade == 'BUY' or self.dlr_trade == 'BUY':
            if (self.apt_trade == 'BUY' or self.apt_trade is None) and (self.dlr_trade == 'BUY' or self.dlr_trade is None):
                for _ in range(AKIM_BUY_MULT):
                    await self.place_order('AKIM', qty, xchange_client.Side.SELL)
                    await self.place_order('AKAV', qty, xchange_client.Side.BUY)
                await asyncio.sleep(1)
                await self.slow_close(['AKAV', 'AKIM'])
        elif self.apt_trade == 'SHORT' or self.dlr_trade == 'SHORT':
            if (self.apt_trade == 'SHORT' or self.apt_trade is None) and (self.dlr_trade == 'SHORT' or self.dlr_trade is None):
                for _ in range(AKIM_BUY_MULT):
                    await self.place_order('AKIM', qty, xchange_client.Side.BUY)
                    await self.place_order('AKAV', qty, xchange_client.Side.SELL)
                await self.slow_close(['AKAV', 'AKIM'])
        self.apt_trade = None
        self.dlr_trade = None
Final Thoughts¶
This challenge was really fun. I had a great time tuning this strategy to squeeze out any edge that was possible. I do wish we had the time to test our strategy a little more rigorously, especially the ETF arbitrage. However, I am incredibly happy with my own and my team's performance in this case and at the competition as a whole.