A detailed talk about how to take advantage of the new AI/ML tools Google recently announced while building day-to-day web applications.
4. What is MediaPipe
● Created by Google in 2012
● To be used for processing Youtube videos for compliance
● Eventually expanded to other Google products like Google Home
● First announced publicly in 2019
● It is a cross-platform, open source framework
● With a low-code API
6. Using MediaPipe in your web apps
Demo app: https://gdg-warri.netlify.app/
Description: A movie rating website that uses Sentiment Analysis, a branch of
Text Classification, to categorize text with predefined labels, in this case
“positive” or “negative”.
Give your review about a movie to try it out!
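A sketch of how the demo might map a review to a label. Only `classifier.classify()` is MediaPipe's API (from the `@mediapipe/tasks-text` package); the wrapper function and its names are illustrative, not the demo's actual code.

```javascript
// Hypothetical wrapper: map a review string to a sentiment label.
// classify() runs synchronously and returns a TextClassifierResult
// whose categories each carry a categoryName and a score.
function classifyReview(classifier, review) {
  const result = classifier.classify(review);
  const categories = result.classifications[0].categories;
  // Pick the highest-scoring category: "positive" or "negative".
  const top = categories.reduce((a, b) => (b.score > a.score ? b : a));
  return top.categoryName;
}
```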
7. Text Classification
Text classification is an NLP task where algorithms automatically assign
predefined categories or labels to text documents based on their content.
8. Creating a Text Classifier in 3 steps
1. Install the @mediapipe/tasks-text package
2. Load WebAssembly files needed for the classification
3. Instantiate the TextClassifier with custom options
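Assuming the demo follows MediaPipe's standard web setup, the three steps above might look like this sketch (the CDN path for the WebAssembly files is illustrative):

```javascript
// Step 1 is a shell command: npm install @mediapipe/tasks-text

// Steps 2 and 3: load the WebAssembly assets, then build the classifier.
async function createClassifier(modelAssetPath) {
  const { FilesetResolver, TextClassifier } = await import("@mediapipe/tasks-text");

  // Step 2: load the wasm files the text tasks run on (CDN path assumed).
  const wasmFileset = await FilesetResolver.forTextTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-text@latest/wasm"
  );

  // Step 3: instantiate the TextClassifier with custom options.
  return TextClassifier.createFromOptions(wasmFileset, {
    baseOptions: { modelAssetPath },
    maxResults: 2,
  });
}
```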
11. modelAssetPath: Path to the model our classification is going to run on.
maxResults: The number of top results to return.
scoreThreshold: The minimum prediction score a category must reach to be returned; results below it are rejected.
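Put together, the options object might look like the sketch below. The model URL is the BERT classifier path mentioned in the speaker notes; the numeric values are illustrative, not the demo's settings.

```javascript
// Illustrative options for TextClassifier.createFromOptions().
const classifierOptions = {
  baseOptions: {
    // Model the classification runs on (Google-hosted BERT classifier).
    modelAssetPath:
      "https://storage.googleapis.com/mediapipe-models/text_classifier/bert_classifier/float32/latest/bert_classifier.tflite",
  },
  maxResults: 2,       // return at most the top 2 categories
  scoreThreshold: 0.5, // reject categories scoring below 0.5
};
```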
12. Text Classification models
● BERT-classifier model (recommended for performance)
● Average word embedding model
● …and yours!
Learn more: https://developers.google.com/mediapipe/solutions/text/text_classifier#models
14. Connect with me
Samuel Olaegbe
Twitter: @devloader
Github: @goodhands
LinkedIn: Samuel Olaegbe
Telegram: https://t.me/samuelolaegbe
Blog: https://devloader.hashnode.dev/
Speaker notes
Facial detection
Object identification
Gesture detection
Facial posture
Face Landmark Detection
Text classification
Language detection
You could use the remote URL to the model instead: https://storage.googleapis.com/mediapipe-models/text_classifier/bert_classifier/float32/latest/bert_classifier.tflite
BERT-classifier model: This model uses a BERT-based architecture (specifically, the MobileBERT model) and is recommended because of its high accuracy. It contains metadata that allows the task to perform out-of-graph BERT tokenization. (BERT stands for Bidirectional Encoder Representations from Transformers.)
Average word embedding model: This model uses an average word-embedding architecture. It offers a smaller model size and lower latency at the cost of lower prediction accuracy compared to the BERT-classifier. Customizing this model through additional training is also faster than training the BERT-based classifier. It contains metadata that allows the task to perform out-of-graph regex tokenization.