Deploying eco-friendly processor architectures to power future-generation AI
The benefits of AI
Around the world, artificial intelligence is rapidly becoming technology's top priority, enabling new services such as robotics and autonomous vehicles and enhancing existing ones like telemedicine and e-learning. Governments, organizations and individuals all stand to gain from these immense benefits of AI.
The future of AI is bright. However, deploying AI sustainably presents real challenges.
Computing power
AI requires substantial computing power, which drives up energy demand. Inference, and especially training, can consume a machine's entire processing capacity, in the latter case for extended periods of time.
AI model size
The size of AI models is growing at a rapid pace, driven by the broad application of AI in everyday life, which demands ever-greater model complexity.
Efficient computing systems
To keep enjoying the benefits of AI without harming our climate, we need computing systems that can run AI models efficiently.
Our opportunities
Research opportunities
We welcome university faculty and researchers interested in collaborations in the area of hardware/software co-optimization for sustainability.
Hardware/Software co-design for sustainable computing
We've outlined plans for three key areas in which we are committing research and development resources. These areas align with our passions and abilities, and they remain top priorities for society.
Algorithmic optimization
Most AI models rely on dense matrix operations, which demand high compute density; sparse computations, by contrast, can offer far better energy efficiency. Building on this principle, we design tools for more energy-efficient AI computing. Our approach involves three levels of optimization: model-level, kernel-level and system-level.
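As a minimal sketch of the model- and kernel-level steps (an illustrative Python example using NumPy and SciPy, not our actual tooling), magnitude pruning removes small weights and a sparse storage format lets the kernel skip the resulting zeros:

```python
# Illustrative sketch: model-level sparsification via magnitude pruning,
# showing how sparsity reduces the arithmetic a kernel has to perform.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)).astype(np.float32)  # dense layer weights
x = rng.standard_normal((1024,)).astype(np.float32)       # input activations

# Model-level: zero out the 90% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Kernel-level: store the pruned weights in a sparse format so the
# matrix-vector product only touches the surviving 10% of entries.
W_sparse = csr_matrix(W_pruned)

dense_macs = W.size         # multiply-accumulates for the dense kernel
sparse_macs = W_sparse.nnz  # multiply-accumulates for the sparse kernel
print(f"dense MACs:  {dense_macs:,}")
print(f"sparse MACs: {sparse_macs:,} ({sparse_macs / dense_macs:.0%} of dense)")

y = W_sparse @ x  # sparse matrix-vector product
```

System-level optimization then decides where and when such kernels run, so that the saved arithmetic actually translates into saved energy.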
Chip-level innovation
With models optimized at the algorithmic level, the hardware must evolve accordingly. We redesign hardware in tandem with algorithmic optimizations, targeting both energy efficiency and chip re-use. This flexibility lets us use a single processor architecture for multiple workloads, recycling hardware real estate, reducing e-waste and improving circularity.
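As a toy illustration of this idea (the accelerator parameters, workload names and numbers below are hypothetical, chosen only to show the shape of the reasoning), a single parameterized architecture description can be re-targeted to different workloads instead of designing a separate chip for each:

```python
# Toy sketch (hypothetical architecture and numbers, for illustration only):
# one parameterized accelerator description serving two different workloads.
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceleratorConfig:
    pe_rows: int            # processing-element array height
    pe_cols: int            # processing-element array width
    sram_kib: int           # on-chip buffer size
    supports_sparsity: bool # whether the array can skip zero weights

@dataclass(frozen=True)
class Workload:
    name: str
    total_macs: int  # dense multiply-accumulate count
    density: float   # fraction of non-zero weights after pruning

def cycles(cfg: AcceleratorConfig, w: Workload) -> float:
    """Rough cycle estimate: a sparsity-aware array performs only the
    non-zero MACs; a dense array performs all of them."""
    peak = cfg.pe_rows * cfg.pe_cols  # MACs per cycle at full utilization
    macs = w.total_macs * (w.density if cfg.supports_sparsity else 1.0)
    return macs / peak

cfg = AcceleratorConfig(pe_rows=16, pe_cols=16, sram_kib=512, supports_sparsity=True)
for w in (Workload("vision-cnn", 2_000_000_000, density=0.30),
          Workload("speech-rnn", 500_000_000, density=0.10)):
    print(f"{w.name}: ~{cycles(cfg, w):,.0f} cycles on the same architecture")
```

The point of the sketch is the design choice it encodes: one flexible architecture, evaluated against many workloads, rather than a bespoke chip per model.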