I have research interests in a variety of topics in the computer-systems field, including programming-language
design and implementation, optimizing compilers for highly parallel systems, high-performance
implementation of AI algorithms, parallel algorithms, memory management, and computer algebra.
Co-architected the core data-parallel language of Futhark and its optimizing compiler.
Futhark is a purely functional, map-reduce
language supporting (parallel) bulk operations on regular arrays. Futhark currently targets efficient
execution on GPGPUs and seeks a common ground that combines the strengths of the functional and imperative
approaches. Futhark has been used in interdisciplinary collaboration, e.g., in remote sensing, finance
and image processing, and has generated a nontrivial number of
MSc and BSc theses.
Program Committees: SC’22, ICPP (’18, ’19, ’22), PPoPP (’17, ’19, ’21, ’23), EuroPar’20, PACT (’15, ’16, ’19, ’23),
IPDPS (’18 Track Co-Chair, ’25), FHPC’17 (Co-Chair), ICS (’14, ’23).
Local organizer of PLDI'24 in Copenhagen (with Fritz Henglein).
Academic Awards and Honors (Selected)
DIKU Teacher of the Year Award (2013 and 2015)
Selected Refereed Papers
Cosmin E. Oancea and Stephen M. Watt. "GPU Implementations for Midsize Integer Addition and Multiplication",
accepted for publication in Springer LNCS;
currently available on arXiv.
Lotte M. Bruun, Ulrik S. Larsen, Nikolaj H. Hinnerskov and Cosmin E. Oancea. "Reverse-Mode AD of Multi-Reduce and Scan in Futhark",
In Procs. of Symposium on Implementation and Application of Functional Languages (IFL'23), 2024.
D. Serykh, S. Oehmcke, C. Oancea, D. Masiliunas, J. Verbesselt, Y. Cheng, S. Horion, F.
Gieseke and N. Hinnerskov. "Seasonal-Trend Time Series Decomposition on Graphics Processing Units",
In Procs. of IEEE Int. Conf. on Big Data (BigData), 2023.
P. Munksgaard, C. E. Oancea and T. Henriksen. "Compiling a functional array language with non-semantic memory information",
In Procs. of Symposium on Implementation and Application of Functional Languages (IFL'22), 2023.
R. Schenck, O. Rønning, T. Henriksen and C. E. Oancea. "AD for an Array Language with Nested Parallelism",
In Procs. of Int. Conf. for High Performance Computing, Networking,
Storage and Analysis (SC), 2022.
P. Munksgaard, T. Henriksen, P. Sadayappan and C. E. Oancea. "Memory Optimizations in an Array Language",
In Procs. of Int. Conf. for High Performance Computing, Networking,
Storage and Analysis (SC), 2022.
P. Munksgaard, S. Breddam, T. Henriksen, F. Gieseke and C. E. Oancea. "Dataset Sensitive Autotuning of Multi-Versioned Code based on Monotonic Properties", best paper award, 22nd Int. Symposium on Trends in Functional Programming (TFP), 2021.
W. Pawlak, M. Hlava, M. Metaksov and C. E. Oancea. "Acceleration of Lattice Models
for Pricing Portfolios of Fixed-Income Derivatives",
In Procs. of Int. Workshop on Libraries, Languages and
Compilers for Array Programming (ARRAY), 2021.
C. E. Oancea, T. Robroek and F. Gieseke. "Approximate Nearest-Neighbour Fields via Massively-Parallel Propagation-Assisted KD Trees",
IEEE Int. Conf. on Big Data, special track on Machine Learning and Big Data (MLDB), 2020.
T. Henriksen, S. Hellfritzsch, P. Sadayappan and C. E. Oancea. "Compiling Generalized Histograms for GPU",
International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2020.
F. Gieseke, S. Rosca, T. Henriksen, J. Verbesselt and C. E. Oancea. "Massively-Parallel Change Detection for Satellite Time Series Data with Missing Values",
IEEE 36th International Conference on Data Engineering (ICDE), pages 385-396, 2020.
W. Pawlak, M. Elsman and C. E. Oancea. "A Functional Approach to Accelerating Monte Carlo based American Option Pricing",
31st International Symposium on Implementation and Application of Functional Languages (IFL'19), Singapore, September 2019.
T. Henriksen, F. Thorøe, M. Elsman and C. E. Oancea. "Incremental Flattening for Nested Data Parallelism",
International Symposium on Principles and Practice of Parallel Programming
(PPoPP), pp. 53-67, Washington D.C., US, 2019.
M. Elsman, T. Henriksen, D. Annenkov and C. E. Oancea, "Static Interpretation of Higher-order Modules in Futhark: Functional GPU Programming in the Large",
Proc. ACM Program. Lang. (ICFP’18), pp 97:1–97:30, St. Louis, US, 2018.
T. Henriksen, M. Elsman and C. E. Oancea. "Modular Acceleration: Tricky Cases of Functional High-Performance Computing",
Procs. of Workshop on Functional High-Performance Computing (FHPC), St. Louis, US, 2018.
T. Henriksen, N. G. W. Serup, M. Elsman, F. Henglein and C. E. Oancea. "Futhark: Purely Functional GPU-programming with Nested Parallelism and In-place Array Updates",
Int. Conf. on Programming Language Design and Implementation (PLDI), Barcelona, Spain, 2017.
F. Gieseke, C. E. Oancea and C. Igel. "Bufferkdtree: A Python library for massive nearest neighbor queries on multi-many-core devices",
Knowledge-Based Systems Journal, 120:13, 2017.
T. Henriksen, M. Dybdal, H. Urms, A. S. Kiehn, D. Gavin, H. Abelskov, M. Elsman and C. E. Oancea. "APL on GPUs: A TAIL from the Past, Scribbled in Futhark",
5th Int. Workshop on Functional High Performance Computing (FHPC), Nara, Japan, 2016.
T. Henriksen, K. F. Larsen and C. E. Oancea. "Design and GPGPU Performance of Futhark’s Redomap Construct",
3rd Int. Workshop on Libraries, Languages and Compilers for Array Programming (ARRAY),
pp. 17-24, Santa Barbara, US, 2016.
C. Andreetta, V. Begot, J. Berthold, M. Elsman, F. Henglein, T. Henriksen, M. Nordfang and C. E. Oancea. "FinPar: A Parallel Financial Benchmark",
ACM Journal Trans. Archit. Code Optim. (TACO), vol. 13(2), pp. 18.1–18.27, 2016.
F. C. Gieseke, C. E. Oancea, A. Mahabal, C. Igel and T. Heskes. "Bigger buffer k-d trees on multi-many-core systems",
Big Data Deep Learning in High Performance Computing,
Springer, pp. 172-180, 2016.
C. E. Oancea and L. Rauchwerger. "Scalable conditional-induction variables (CIV) analysis",
13th IEEE/ACM International Symposium on Code Generation and Optimization (CGO'15),
San Francisco, USA, pp. 213-224, 2015.
Fabian C. Gieseke, Justin Heinermann, Cosmin E. Oancea and Christian Igel. "Buffer k-d trees: processing massive nearest neighbor
queries on GPUs",
31st Int. Conf. on Machine Learning (ICML'14), Beijing, China, 2014.
Fabian C. Gieseke, Kai L. Polsterer, Cosmin E. Oancea and Christian Igel. "Speedy Greedy Feature Selection: Better Redshift
Estimation via Massive Parallelism",
22nd European Symposium on Artificial Neural Networks (ESANN),
pp. 87-92, Belgium, 2014.
Troels Henriksen, Martin Elsman, and Cosmin E. Oancea. "A Hybrid Approach to Size Inference in Futhark",
3rd ACM-SIGPLAN Workshop on Functional High-Performance Computing (FHPC),
Gothenburg, Sweden, September 2014.
Troels Henriksen and Cosmin E. Oancea. "Bounds Checking: An Instance of Hybrid Analysis",
ACM SIGPLAN Int. Workshop on Libraries, Languages and Compilers for
Array Programming (ARRAY). Edinburgh, UK, June 2014.
Troels Henriksen and Cosmin E. Oancea. "A T2 Graph-Reduction Approach To Fusion",
2nd ACM SIGPLAN Workshop on Functional High-Performance Computing.
Boston, Massachusetts. September 2013.
Cosmin E. Oancea, Christian Andreetta, Jost Berthold,
Alain Frisch, and Fritz Henglein. "Financial Software on GPUs: Between
Haskell and Fortran",
1st ACM SIGPLAN Workshop on Functional High-Performance
Computing (FHPC'12), Copenhagen, 2012.
Cosmin E. Oancea and Lawrence Rauchwerger. "Logical Inference Techniques for Loop Parallelization",
33rd ACM-SIGPLAN Conf. on Prog. Lang. Design and Implem. (PLDI'12),
pp 509-520, June 2012.
Cosmin E. Oancea and Lawrence Rauchwerger. "A Hybrid Approach to Proving Memory Reference Monotonicity",
24th Int. Workshop on Languages and Compilers for Parallel Computing (LCPC'11),
LNCS, Vol. 7146, pp. 61-75, 2013.
Cosmin E. Oancea and Stephen M. Watt. "An Architecture for Generic Extensions",
Science of Computer Programming Journal, Elsevier, Vol 76(4),
pp 258-277, doi:10.1016/j.scico.2009.09.008, 2011.
Cosmin E. Oancea, Alan Mycroft and Tim Harris.
"A Lightweight In-Place Implementation for
Software Thread-Level Speculation",
21st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA'09),
August 2009, Calgary, Canada.
Cosmin E. Oancea, Alan Mycroft and Stephen M. Watt.
"A New Approach to Parallelising Tracing Algorithms",
International Symposium on Memory Management (ISMM'09),
June 2009, Dublin, Ireland.
Cosmin E. Oancea and Alan Mycroft.
"Set-Congruence Dynamic Analysis for Software Thread-Level Speculation",
Languages and Compilers for Parallel Computing (LCPC'08) 21st Annual Workshop,
August 2008, Edmonton, Canada.
Cosmin E. Oancea and Alan Mycroft.
"Software Thread-Level Speculation: an Optimistic Library Implementation",
International Workshop on Multicore Software Engineering (IWMSE'08),
pp 23-32 (ACM Digital Library), May 2008, Leipzig, Germany.
Cosmin E. Oancea and Stephen M. Watt.
"Generic Library Extension in a Heterogeneous Environment",
Library-Centric Software Design LCSD'06,
pp. 25-35, October 2006, Portland, USA.
Cosmin E. Oancea.
"Parametric Polymorphism for Software Component Architectures and Related Optimizations",
PhD Thesis, Department of Computer Science, The University of Western Ontario,
Advisor: Stephen M. Watt, July 2005, London, ON, Canada.
Cosmin E. Oancea and Stephen M. Watt.
"Parametric Polymorphism for Software Component Architectures",
ACM Object-Oriented Programming, Systems, Languages and Applications OOPSLA'05,
pp. 147-166, October 2005, San Diego, USA.
Cosmin E. Oancea and Stephen M. Watt.
"Domains and Expressions: An Interface Between Two Approaches to Computer Algebra,"
ACM International Symposium on Symbolic and Algebraic Computation ISSAC'05,
pp. 261-269, July 2005, Beijing, China.
Cosmin E. Oancea, Jason W. Selby, Mark Giesbrecht, and Stephen M. Watt.
"Distributed Models of Thread Level Speculation",
International Conference on Parallel and Distributed Processing Techniques and
Applications PDPTA'05,
pp. 920-927, June 2005, Las Vegas, USA.
Cosmin E. Oancea, Clare So, and Stephen M. Watt.
"Generalization in Maple",
Maple Conference,
pp. 277-382, July 2005, Waterloo, Canada.
Yannis Chicha, Michael Lloyd, Cosmin E. Oancea, and Stephen M. Watt.
"Parametric Polymorphism for Computer Algebra Software Components",
6th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing SYNASC'04,
pp. 119-130, Sept. 2004, Timisoara, Romania.
Cosmin E. Oancea and Stephen M. Watt.
"A Framework for Using Aldor Libraries with Maple",
EACA'04, a satellite conference of ISSAC'04,
pp. 219-224, June 2004, Universidad de Santander, Spain.
Selected Supervised Students
Troels Henriksen. "Exploiting functional invariants to optimise parallelism: a dataflow approach",
MSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, February 2014.
Troels Henriksen. "Design and Implementation of the Futhark Programming Language",
PhD Thesis, Department of Computer Science (DIKU), University of Copenhagen, December 2017.
Niels G. W. Serup. "Memory Block Merging in Futhark",
MSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, November 2017.
Niels G. W. Serup. "Extending Futhark with a write construct",
MSc Project, Department of Computer Science (DIKU), University of Copenhagen, June 2016.
Rasmus Wriedt Larsen. "Generating Efficient Code for Futhark's Segmented Redomap",
MSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, March 2017.
Mette Marie Kowalski. "Designing and Accelerating a Generic FFT Library in Futhark",
BSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, June 2018.
Mikkel Storgaard Knudsen. "FShark: Futhark programming in FSharp",
MSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, August 2018.
Marek Hlava and Martin Metaksov. "Accelerated Interest Rate Option Pricing using Trinomial Trees",
MSc Thesis, Department of Computer Science (DIKU), University of Copenhagen, August 2018.
In the imperative context, entirely static analysis has been successfully applied to (auto-) parallelizing loops with simple control flow and affine array indexing, but is often conservative when these assumptions do not hold. On the other hand, entirely dynamic analysis of memory-reference traces, e.g., thread-level speculation, can be applied aggressively, but incurs significant (and often non-scalable) overheads.
In collaboration with Prof. Lawrence Rauchwerger, I have studied how to unify static and dynamic analysis into a hybrid compiler framework for the Fortran77 language, which succeeds in detecting and optimizing parallelism for a large class of difficult loops at negligible (or in the worst case scalable) runtime overheads.
The idea has been to use static, interprocedural analysis (i) to model loop parallelism as an equation on abstract sets (of array references), and (ii) to extract a cascade of sufficient conditions for parallelism, of increasing time complexity, which are verified at runtime until one succeeds. An evaluation on ~30 benchmarks from the SPEC and Perfect-Club suites (~1000 loops) demonstrates that the approach is viable and outperforms commercial compilers by a significant margin.
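By way of illustration only (a hypothetical Python sketch, not the actual Fortran framework): consider a loop whose sole cross-iteration hazard is writing a[idx[i]], so that parallel execution is safe precisely when idx is injective. A cascade of sufficient conditions, ordered from cheap and specific to expensive and general, can establish this at runtime:

```python
def is_strided(idx):
    """Cheapest sufficient condition: a constant non-zero stride
    implies all indices are distinct."""
    if len(idx) < 2:
        return True
    step = idx[1] - idx[0]
    return step != 0 and all(b - a == step for a, b in zip(idx, idx[1:]))

def is_strictly_monotonic(idx):
    """Strictly increasing or strictly decreasing indices are
    pairwise distinct."""
    return (all(a < b for a, b in zip(idx, idx[1:])) or
            all(a > b for a, b in zip(idx, idx[1:])))

def is_injective(idx):
    """Most general (and most expensive) condition: no index repeats."""
    return len(set(idx)) == len(idx)

def parallel_safe(idx):
    """Run the cascade: the first sufficient condition that holds
    proves that the loop's writes to a[idx[i]] are independent."""
    return any(test(idx) for test in
               (is_strided, is_strictly_monotonic, is_injective))
```

In the framework described above, such conditions are extracted statically and only evaluated on the program's own values at runtime, so the common (regular) cases pay almost nothing and the general test is a last resort.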
In collaboration with Prof. Alan Mycroft, I have studied topics related to dynamic extraction and optimization of parallelism. In particular:
Thread-Level Speculation (TLS) is a parallelization technique that allows
loop iterations to be executed (optimistically) in parallel even in the
presence of cross-iteration dependencies, which are tracked and fixed at
runtime.
Related work on software TLS has studied
the case in which one (overarching) TLS implementation is used to
disambiguate dependencies for the entire memory footprint of the
application. In this setting, good speedup is typically achieved
when enough accesses are disambiguated statically by the compiler and only
a few others require TLS support.
Our software-TLS work has studied, in some sense, the dual
problem, which addresses a max-min performance question:
``What speedup can TLS achieve when all accesses require TLS
disambiguation?''
In this setting, different TLS implementations
disambiguate disjoint memory partitions, and each implementation
is tuned to take advantage of the access patterns of the variables
that inhabit the corresponding partition.
More information and resources related to this work, in particular the
PolyLibTLS library, are available online.
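The speculative mechanism itself can be conveyed with a toy sketch (hypothetical Python; not the PolyLibTLS implementation): iterations run optimistically in an arbitrary order while their read/write sets are logged, and if any cross-iteration dependence turns out to have executed out of logical order, the speculative state is discarded and the loop re-executes serially.

```python
import random

def tls_execute(n, body, memory):
    """Toy software-TLS sketch: run the n loop iterations
    optimistically in a scrambled order on a shadow copy of memory,
    logging per-iteration read/write sets; on a dependence violation,
    roll back and re-execute the loop serially."""
    shadow = dict(memory)
    log = {}                                  # logical iter -> (reads, writes)
    order = list(range(n))
    random.shuffle(order)                     # stands in for unordered parallel execution
    for it in order:
        reads, writes = set(), set()
        def load(a, r=reads):
            r.add(a)
            return shadow[a]
        def store(a, v, w=writes):
            w.add(a)
            shadow[a] = v
        body(it, load, store)
        log[it] = (reads, writes)
    pos = {it: p for p, it in enumerate(order)}
    violated = any(
        log[i][1] & (log[j][0] | log[j][1])   # iter i writes a location j touches...
        and pos[i] > pos[j]                   # ...but i actually ran after j
        for i in range(n) for j in range(i + 1, n))
    if not violated:
        return shadow                         # speculation succeeded
    shadow = dict(memory)                     # misspeculation: roll back,
    for it in range(n):                       # then re-execute serially
        body(it, lambda a: shadow[a],
                 lambda a, v: shadow.__setitem__(a, v))
    return shadow
```

In the partitioned scheme described above, several such implementations would coexist, each owning a disjoint memory partition and specialized to its access patterns (e.g., a partition known to be read-only needs no logging at all).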
Tracing algorithms are typically parallelized by giving
the working queue a processor-centric semantics: the
working queue is split across processors, and each part holds
the elements that are to be processed by its corresponding processor.
We have proposed a parallelization technique in which the
working queue has a memory-centric semantics: the
memory is over-partitioned, and a working queue is associated
with each partition, holding the addresses that belong to that partition.
This enhances locality of reference but, more importantly,
eliminates locking from the critical path of the algorithm.
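The memory-centric scheme can be sketched as follows (hypothetical Python names; a sequential simulation of the parallel setting): addresses are partitioned, each partition owns its queue and mark set, and an edge that crosses partitions is forwarded to the owner's queue rather than marked under a shared lock.

```python
from collections import deque

def trace(heap, roots, n_parts):
    """Memory-centric tracing sketch: heap maps an address to its
    successor addresses.  The address space is over-partitioned; each
    partition owns a queue and a mark set, so marking is always done
    by the partition's owner and needs no lock.  Here a single thread
    drains the partitions round-robin, standing in for one worker per
    partition."""
    owner = lambda a: a % n_parts             # hypothetical partitioning function
    queues = [deque() for _ in range(n_parts)]
    marked = [set() for _ in range(n_parts)]
    for r in roots:
        queues[owner(r)].append(r)
    while any(queues):
        for p in range(n_parts):
            while queues[p]:
                a = queues[p].popleft()
                if a in marked[p]:
                    continue
                marked[p].add(a)              # only partition p's owner marks here
                for succ in heap.get(a, ()):
                    # a cross-partition edge is forwarded to its owner's
                    # queue instead of being processed in place
                    queues[owner(succ)].append(succ)
    return set().union(*marked)
```

Forwarding is the only cross-partition communication, and it can be buffered and batched, which is what yields the locality benefit mentioned above.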
In collaboration with Fabian Gieseke and Christian Igel, we have used
similar techniques to improve locality of reference in a GPGPU
implementation of k-d trees, named buffer k-d trees. This was
applied to solve a big-data nearest-neighbor problem in the
context of classifying celestial bodies into galaxies and stars.
As a PhD student, advised by Prof. Stephen M. Watt, I studied various aspects of language interoperability.
The original motivation for my research was the observation that, although parametric polymorphism was already mainstream,
the software component architectures of the time, such as CORBA, JNI and DCOM, were lagging behind common programming practice and did not support parametric polymorphism across language boundaries.
In this context, I investigated how to resolve the different binding times and parametrization semantics of a range of representative languages, such as C++, Java, Aldor and Maple, and identified a common ground that could be suitably mapped to different language bindings. This work resulted in two frameworks: Alma, which allows interoperability between two very different
computer algebra systems (Aldor and Maple), and GIDL (Generic Interface Definition Language), a more systematic solution for
parametric polymorphism, designed as a generic extension framework that can easily be adapted to work on top of various component architectures.