To compare the performance of different algorithms and techniques, research communities have developed benchmark datasets across many domains. However, tracking progress in these research areas is difficult, as publications appear across different venues at the same time, and many claim to represent the state of the art. Research communities therefore organise periodic competitions to evaluate algorithms and techniques under common conditions, thereby tracking advancements in the field. However, these competitions impose a significant operational burden: organisers must manage and evaluate a large volume of submissions, and participants typically develop their solutions in diverse environments, leading to compatibility issues during evaluation. This paper presents an online competition system that automates the submission and evaluation process. The system allows organisers to manage large numbers of submissions efficiently and evaluates each submission in an isolated environment; it has been successfully used to serve several competitions and applications.
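As a minimal sketch of what such isolated evaluation could look like (the container image, resource limits, entry point, and timeout below are illustrative assumptions, not the system's actual configuration), a submission might be executed inside a short-lived Docker container with networking disabled and CPU/memory caps:

```python
import subprocess

def evaluate_submission(submission_dir: str, image: str = "python:3.11-slim") -> str:
    """Run a participant's entry point inside an isolated, resource-capped
    container. All names here (image, limits, entry point) are hypothetical."""
    result = subprocess.run(
        [
            "docker", "run",
            "--rm",                     # discard the container after the run
            "--network=none",           # no network access during evaluation
            "--memory=2g",              # cap memory usage
            "--cpus=1",                 # cap CPU usage
            "-v", f"{submission_dir}:/submission:ro",  # mount code read-only
            image,
            "python", "/submission/run.py",  # hypothetical entry point
        ],
        capture_output=True,
        text=True,
        timeout=600,  # hard wall-clock limit on the evaluation
    )
    if result.returncode != 0:
        raise RuntimeError(f"Evaluation failed: {result.stderr}")
    return result.stdout  # e.g. predictions to be scored by the organiser


if __name__ == "__main__":
    print(evaluate_submission("/path/to/submission"))
```

Running every submission in a fresh, read-only, network-less container of this kind is one common way to sidestep the compatibility issues that arise when participants build their solutions in diverse environments.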