What does interpretability mean in AI? It can be defined as the ability to describe an AI model in terms that humans can easily understand. By understanding a model better, i.e. understanding the logic behind the predictions it makes, you can begin to answer questions about its internal operations.
Interpretability also helps us judge whether a model can be trusted and held accountable, and whether its decisions are fair.
What is Captum?
Python has many useful libraries for handling various tasks. For model interpretability there is Captum, a library built on top of PyTorch that integrates directly with PyTorch models. So, what does Captum offer? Captum provides multiple attribution algorithms that help you understand the contribution of input features, hidden neurons and layers. The name Captum comes from the Latin word for comprehension.
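To make this concrete, here is a minimal sketch of how Captum plugs into an ordinary PyTorch model. The model, input values and target class are all made up for illustration, not taken from Captum's documentation.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A small, hypothetical model: 3 input features -> 2 output classes.
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# A single made-up input example (batch of one).
inputs = torch.tensor([[0.5, -1.2, 3.0]])

# Integrated Gradients attributes the score of class 1 back to each input feature.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)

print(attributions)  # one contribution score per input feature
print(delta)         # approximation error of the path integral
```

The same pattern applies to real models: wrap the model (or any forward function) in an attribution class, then call `attribute` on the inputs you want to explain.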
Which attribution algorithms are available in Captum?
The attribution algorithms are the backbone of Captum and drive most of its functionality. They can be categorised into three groups –
- Primary Attribution: evaluates the contribution of each input feature of the model to its output. Integrated Gradients is the best-known algorithm in this category.
- Layer Attribution: evaluates the contribution of every neuron in a specified layer to the model's output.
- Neuron Attribution: evaluates every input feature's contribution to a specified hidden neuron.
These are the three categories of attribution algorithms available in Captum. Some of them, such as Shapley Value Sampling, are based on Shapley values from cooperative game theory; the sketch below runs one algorithm from each category on a small model.
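As a rough illustration, the example uses Integrated Gradients for primary attribution, Layer Conductance for layer attribution and Neuron Conductance for neuron attribution. The network, batch, layer and neuron index are all hypothetical choices for the sketch.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, LayerConductance, NeuronConductance

class Net(nn.Module):
    """A tiny, hypothetical classifier: 3 features -> 2 classes."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = Net().eval()
inputs = torch.randn(4, 3)  # a made-up batch of 4 examples

# Primary attribution: contribution of each input feature to class 1.
input_attr = IntegratedGradients(model).attribute(inputs, target=1)

# Layer attribution: contribution of each neuron in fc1 to class 1.
layer_attr = LayerConductance(model, model.fc1).attribute(inputs, target=1)

# Neuron attribution: contribution of each input feature to hidden neuron 0 of fc1.
neuron_attr = NeuronConductance(model, model.fc1).attribute(
    inputs, neuron_selector=0, target=1
)

print(input_attr.shape)   # (4, 3) - one value per input feature
print(layer_attr.shape)   # (4, 8) - one value per fc1 neuron
print(neuron_attr.shape)  # (4, 3) - input features vs. a single hidden neuron
```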
How do you create visualisations with Captum?
Captum's visualisation utilities can render attributions in 7 ways (a short usage sketch follows the list) –
- Heat mapping: the attribution values are rendered as a coloured heat map.
- Blended heat mapping: the heat map is overlaid on a greyscale version of the original image.
- Colour mapping: specific colour maps are used for the different kinds of values: red for negative attributions, green for positive attributions and blue for absolute values.
- Graphical plotting: the visualisations are generated as Matplotlib figures from NumPy arrays of attribution values.
- Masking: the original image is multiplied element-wise by the normalised attribution values, so only the regions the model relied on remain visible.
- Alpha-scaling: the alpha channel of each pixel is set to its normalised attribution value, making unimportant regions transparent.
- Mathematical signs: the attributions are split by sign, so you can display only the positive values, only the negative values, or the absolute values.
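Most of these options map onto the `method` and `sign` arguments of Captum's `visualize_image_attr` helper. Below is a rough sketch using a made-up convolutional model and a random tensor as a stand-in for a real image.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
from captum.attr import visualization as viz

# A hypothetical tiny image classifier over 3 x 32 x 32 inputs.
model = nn.Sequential(
    nn.Conv2d(3, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(4 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)  # stand-in for a real image
attributions = IntegratedGradients(model).attribute(image, target=3)

# visualize_image_attr expects H x W x C NumPy arrays.
attr_np = attributions.squeeze(0).permute(1, 2, 0).detach().numpy()
img_np = image.squeeze(0).permute(1, 2, 0).numpy()

# method: "heat_map", "blended_heat_map", "masked_image", "alpha_scaling", ...
# sign:   "positive", "negative", "absolute_value" or "all"
viz.visualize_image_attr(
    attr_np, img_np,
    method="blended_heat_map",
    sign="all",
    show_colorbar=True,
    title="Integrated Gradients",
)
```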
What are the uses of Captum?
The uses of Captum are as follows:
- It helps interpret models that work on images, text, audio and video.
- It lets researchers build new interpretability algorithms on top of its existing base implementations.
- Captum provides a foundation of state-of-the-art algorithms, including the popular Integrated Gradients module, which helps researchers, developers and engineers determine which features contribute to a model's output.
- Captum helps ML researchers work with PyTorch models and build advanced data-processing workflows, which is why Facebook and other social platforms use it. Interpretability can also support content analysis, for example when flagging objectionable content.
To conclude, this write-up has introduced Captum and model interpretability with PyTorch. If you want to learn more about such topics, visit the blog section of E2E Networks.