Sunday, July 17, 2022

What’s Google JAX? NumPy on accelerators


Among the technologies that power the popular open source TensorFlow machine learning platform are automatic differentiation (Autograd) and the XLA (Accelerated Linear Algebra) optimizing compiler for deep learning.

Google JAX is another project that brings together these two technologies, and it offers considerable benefits for speed and performance. When run on GPUs or TPUs, JAX can replace other programs that call NumPy, but its programs run much faster. Additionally, using JAX for neural networks can make adding new functionality much easier than extending a larger framework like TensorFlow.

This article introduces Google JAX, including an overview of its benefits and limitations, installation instructions, and a first look at the Google JAX quickstart on Colab.

What’s Autograd?

Autograd is an automatic differentiation engine that started off as a research project in Ryan Adams' Harvard Intelligent Probabilistic Systems Group. As of this writing, the engine is being maintained but no longer actively developed. Instead, its developers are working on Google JAX, which combines Autograd with additional features such as XLA JIT compilation. The Autograd engine can automatically differentiate native Python and NumPy code. Its primary intended application is gradient-based optimization.
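To make this concrete, here is a minimal sketch of automatic differentiation using JAX's grad transformation, which carries forward the Autograd API. It assumes the jax package is installed (pip install jax); the tanh function below is ordinary Python/NumPy-style code, not anything special to JAX.

```python
import jax
import jax.numpy as jnp

def tanh(x):
    # Plain numerical Python; JAX traces and differentiates it automatically.
    y = jnp.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

# grad returns a new function that computes d/dx tanh(x).
grad_tanh = jax.grad(tanh)
print(grad_tanh(1.0))  # ≈ 0.4199743, i.e. 1 - tanh(1)**2
```

Because grad returns an ordinary function, it can be applied again to get higher-order derivatives, e.g. jax.grad(jax.grad(tanh)).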

TensorFlow’s tf.GradientTape API is based on similar ideas to Autograd, but its implementation is not identical. Autograd is written entirely in Python and computes the gradient directly from the function, whereas TensorFlow’s gradient tape functionality is written in C++ with a thin Python wrapper. TensorFlow uses back-propagation to compute differences in loss, estimate the gradient of the loss, and predict the best next step.
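For comparison, here is a minimal sketch of the same derivative computed with tf.GradientTape. It assumes the tensorflow package is installed; note the different style: operations are recorded onto a tape inside a context manager, then replayed backward to produce the gradient.

```python
import tensorflow as tf

x = tf.Variable(1.0)
with tf.GradientTape() as tape:
    # Operations on watched variables are recorded onto the tape.
    y = tf.tanh(x)

# Replay the tape backward to get dy/dx at x = 1.0.
dy_dx = tape.gradient(y, x)
print(float(dy_dx))  # ≈ 0.4199743, matching the JAX result
```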

What’s XLA?

XLA is a domain-specific compiler for linear algebra developed by the TensorFlow project. According to the TensorFlow documentation, XLA can accelerate TensorFlow models with potentially no source code changes, improving speed and memory usage. One example is a 2020 Google BERT MLPerf benchmark submission, where 8 Volta V100 GPUs using XLA achieved a ~7x performance improvement and a ~5x batch size improvement.
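In JAX, XLA is exposed directly through the jit transformation, which compiles a Python function into a fused XLA computation. The sketch below assumes jax is installed; selu is an illustrative activation function, and the default alpha/lmbda constants are rounded for brevity.

```python
import jax
import jax.numpy as jnp

def selu(x, alpha=1.67, lmbda=1.05):
    # Scaled exponential linear unit, written in plain jax.numpy.
    return lmbda * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)

# jit compiles the function with XLA the first time it is called;
# subsequent calls with the same shapes reuse the compiled kernel.
selu_jit = jax.jit(selu)
x = jnp.arange(5.0)
print(selu_jit(x))
```

The compiled version produces the same results as the plain function; the payoff is that XLA fuses the where/exp/multiply operations into a single kernel instead of launching one per operation.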

Copyright © 2022 IDG Communications, Inc.
