Aug/26/2019
Deep Learning based
Drug Discovery
Bonggun Shin
Outline
• Problem definition

• Drug discovery process

• Drug target interaction (DTI)

• Background

• Sequence data in DTI

• Recent trends in word embeddings

• Previous SOTA in DTI

• Molecule transformer
Drug Discovery Process
Target Identification → Molecule Discovery → Molecule Optimization → Clinical Test → FDA Approval
(Molecule Discovery approaches: Repurposing, Generating)
• Green - Physical or computer-based (in silico) experiments

• Yellow - Animal and human experiments
Drug Repurposing
• Safe - already approved drugs

• Cheap - no need to come up with a new molecule
Allarakhia, Minna. "Open-source approaches for the repurposing of existing or failed candidate drugs: learning from and applying the lessons across diseases." Drug Design, Development and Therapy 7 (2013): 753.
Drug Target Interaction
• Input: 

• Drug - molecule

• Target - protein (biomarker)

• Output: Interaction (affinity score)

• Example: the EGFR protein (a cancer biomarker) has a high affinity score with Lapatinib (an anti-cancer drug)

• If other, non-anti-cancer drugs have high affinity scores with EGFR, they can be candidates for new anti-cancer drugs

Inputs of DTI
• Sequence

• Molecule (SMILES format)

• Lapatinib: "CS(=O)(=O)CCNCC1=CC=C(O1)C2…"

• Protein (FASTA format)

• EGFR: "MRPSGTAGAALLALLAALCPASRALE…"
Sequence Representation
• Sequence: SMILES, FASTA, and text

• Vector representation

• One-hot vector

• (Word/character) embedding - carries more information
• Once represented as a vector, we can apply many deep
learning methods
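
As a concrete illustration (not from the slides), a minimal one-hot encoding of a short SMILES string, using a toy vocabulary:

```python
import numpy as np

def one_hot(sequence, vocab):
    """Map each character of a SMILES/FASTA string to a one-hot row."""
    index = {ch: i for i, ch in enumerate(vocab)}
    X = np.zeros((len(sequence), len(vocab)))
    for pos, ch in enumerate(sequence):
        X[pos, index[ch]] = 1.0
    return X

# Toy vocabulary; a real model would enumerate all characters in the dataset.
print(one_hot("CC=O", vocab=list("CNO=()")).shape)  # (4, 6)
```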
Recent trends in word
embeddings
• Local contextual embeddings: Word2vec [1] 

• RNN-based contextual embeddings: ELMo [2]

• Attention (w/o RNN) based contextual embeddings:
Transformer [3]

• The (current) final boss: BERT [4] 

(Transformer+Masked LM)
[1] Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." NIPS 2013.
[2] Peters, Matthew E., et al. "Deep contextualized word representations." NAACL (2018).
[3] Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems. 2017.
[4] Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
Word2Vec
• Word representation: Word -> Vector

• W2V uses local context words when calculating the representation vector for a target word

• Ex) When inferring the red word, the 4 context words (blue) are used
Word2Vec
• How to train word2vec

• Example sentence: 

I go to Emory University located in Atlanta.

• input - context words

• "I", "go", "Emory" "University"

• output - target word, "to"
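
A minimal sketch of this CBOW-style training setup using gensim, a common Word2Vec implementation (toy corpus and hyperparameters are illustrative only):

```python
from gensim.models import Word2Vec

# Toy corpus: one tokenized sentence.
sentences = [["i", "go", "to", "emory", "university", "located", "in", "atlanta"]]

# CBOW (sg=0): predict the target word ("to") from its context window
# ("i", "go", "emory", "university" when window=2).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

vector = model.wv["to"]  # the learned 50-dimensional representation
```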
ELMo
• Concatenation of independently trained left-to-right and right-to-left LSTMs

• It considers all words in a sentence to represent a word

• Long sequence -> information vanishing problem
How to train ELMo
• For simplicity, assume word-level embeddings (the actual model uses character-level embeddings)

• How to train ELMo

• Example sentence: 

I go to Emory University located in Atlanta.

• Input

• left-to-right model: "I", "go"

• right-to-left model: "Atlanta", "in", "located", "University", "Emory"

• Output

• "to"
Transformer
• Calculating a vector for a word using all words in the sentence

• Attention is all you need!

• Replaces embedding+RNN with the Transformer (self-attention)
Transformer
• Model for machine translation

• Trained (sub) model can be used as word representation
model
* All transformer figures in this slide are from http://jalammar.github.io/illustrated-transformer/
Encoder-Decoder
[Figure: the Transformer encoder-decoder stack]
Encoder
• Encoders can be stacked on top of each other (sequence length is preserved)

• Input words are transformed into randomly initialized vectors, x_i

• An encoder consists of two parts: self-attention and a feed-forward network
Self-Attention
High level explanation
• The vector for the token "it_" can be calculated as a weighted sum (Attention) of all tokens in the same sentence (Self).
Weighted Sum
• Get three helper vectors

• For a given token, calculate
the scores of all other tokens 

• Normalize those scores to get
weights

• Weighted sum
Three (Helper) Vectors
• Query, Key, and Value vector are used when calculating hidden representations

• These helper vectors are just a projection from trainable params Wq, Wk, and Wv
Scoring
• Calculate scores for each word with respect to token "Thinking" using (query, key)

• For example: "Thinking": 112, "Machines": 96

• Repeat this for all other tokens

• The value vectors will be used in the next step
Self-Attentions
• Divide by 8

• The square root of
the dimension of the
key vectors (paper
used dim=64)

• Softmax: normalize scores to sum to one

• Hidden representation
is a weighted sum of
value vectors
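
Putting slides 18-21 together, a minimal numpy sketch of single-head scaled dot-product self-attention (random toy weights; d_k=64 as in the paper, so the divisor is sqrt(64)=8):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model), one row per token.
    Wq/Wk/Wv: (d_model, d_k) trainable projections (the three helper vectors)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                # query/key/value vectors
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # score every pair; divide by sqrt(d_k)=8
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to one
    return weights @ V                              # weighted sum of value vectors

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 512, 64, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 64)
```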
Multi-Heads
• Multiple filters in a CNN correspond to multiple heads in the Transformer

• 8 Heads: 8 sets of trainable params Wq, Wk, and Wv, 8 sets of z1 and z2

• Expecting different heads to learn different aspects
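
A minimal numpy sketch of the multi-head extension (8 independent heads, concatenated and projected back to d_model; the weights are random stand-ins, and Wo is the standard output projection assumed here):

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    # Single head, as in the previous sketch.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    s = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def multi_head(X, heads, Wo):
    """Run 8 heads independently, concatenate (8 x d_k = 512 dims),
    then project back to d_model with the output matrix Wo."""
    return np.concatenate([attention(X, *h) for h in heads], axis=-1) @ Wo

rng = np.random.default_rng(1)
d_model, d_k, n_heads, seq_len = 512, 64, 8, 5
X = rng.normal(size=(seq_len, d_model))
heads = [tuple(rng.normal(size=(d_model, d_k)) for _ in range(3)) for _ in range(n_heads)]
Wo = rng.normal(size=(n_heads * d_k, d_model))
print(multi_head(X, heads, Wo).shape)  # (5, 512)
```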
FeedForward
This is the output of one encoder layer
Positional Encoding
• Why PE? - Need to distinguish "I am a student" vs "am student I a"

• Special patterns representing order of words
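
A minimal sketch of the sinusoidal positional encoding from the Transformer paper, which produces those special order-dependent patterns:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding from "Attention is All You Need":
    PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))"""
    pos = np.arange(seq_len)[:, None]      # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]  # (1, d_model/2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Added to the input embeddings, so the same words in a different
# order produce different representations.
print(positional_encoding(4, 8).shape)  # (4, 8)
```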
BERT
• Google AI Language Team

• 10 months ago

• 1100+ citations

• SOTA on eleven natural language processing tasks

• Outperforms humans on the SQuAD task

• Transformer + New language model task
Overview
• Based on the Transformers

• Two new tasks

• Modified input representation
[Figure: stacked Transformer layers feeding two pre-training heads, MaskedLM and IsNext]
Input Representation
• Segment Embedding (0:first sentence, 1:second sentence)

• Position Embedding (same as Transformer)
(In the figure, [CLS] is used for classification tasks and [SEP] is the sentence separator)
* Adopted from the BERT paper
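
A minimal sketch (hypothetical helper, whitespace tokenization) of how the two-sentence input and its segment ids are packed:

```python
def build_bert_input(sent_a, sent_b):
    """Pack two sentences into one input: [CLS] A [SEP] B [SEP],
    with segment ids 0 for the first sentence and 1 for the second."""
    tokens = ["[CLS]"] + sent_a + ["[SEP]"] + sent_b + ["[SEP]"]
    segments = [0] * (len(sent_a) + 2) + [1] * (len(sent_b) + 1)
    return tokens, segments

tokens, segments = build_bert_input("I am a student".split(), "I go to Emory".split())
print(tokens)    # ['[CLS]', 'I', 'am', 'a', 'student', '[SEP]', 'I', 'go', 'to', 'Emory', '[SEP]']
print(segments)  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```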
Masked LM Task
• Given some "masked tokens" in a sentence, the task is to predict the original tokens

• Original: I am a student

• Input: I [MASK] a student, location (1)

• Output: am

• Downsides of this approach

• The [MASK] token is never seen during fine-tuning

• Mitigation: 80% [MASK], 10% random token, 10% unchanged

• Only 15% of tokens are predicted per sequence -> slow to learn a general language model

• 4 days on a TPU v2-128 pod (worth about $10,000 of Google Cloud credits)
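
A minimal sketch of the 15% / 80-10-10 masking scheme described above (hypothetical helper; word-level for readability, whereas BERT masks WordPiece tokens):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Pick ~15% of positions; replace with [MASK] 80% of the time,
    a random token 10%, and leave unchanged 10%. Returns the
    corrupted sequence plus (position, original token) labels."""
    corrupted, labels = list(tokens), []
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            labels.append((i, tok))
            r = random.random()
            if r < 0.8:
                corrupted[i] = "[MASK]"
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)
            # else: keep the original token unchanged
    return corrupted, labels

# "I am a student" may become "I [MASK] a student" with label (1, "am").
print(mask_tokens("I am a student".split(), vocab=["I", "am", "a", "student"]))
```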
IsNext Task
• Input consists of two sentences

• Original: I am a student / I go to Emory

• Input: [CLS] I [MASK] a student [SEP] I go [MASK]
Emory [SEP], (masked token location, "2", "8")

• Output of MaskedLM: 

"am", "to"

• Output of IsNext: 

1 (the 2nd sentence follows the 1st)
Model Config
• Base

• Hidden-dim: 768 

• A-Head: 12

• Layer: 12

• 110M parameters
• Large

• Hidden-dim: 1024 

• A-Head: 16

• Layer: 24

• 340M parameters
Finetuning (1/4)
Sentence Pair Classification
• MRPC

• One sentence is a paraphrase of the other

• The task is to predict whether the two given sentences are semantically equivalent

• X: ("I go to Emory", 

"I am an Emory student")

• Y: yes (equivalent)
* From the BERT paper
Finetuning (2/4)
Single Sentence Classification
• SST

• Movie review

• X: This movie is fun

• Y: positive
* From the BERT paper
Finetuning (3/4)
Question Answering
• SQuAD

• Given question-paragraph pairs, the task is to select the word/phrase span that answers the given question

• X: (Q: "Where is Emory?", P:
"Emory University is
a private research university in
the Druid Hills neighborhood
of the city of Atlanta,
Georgia, United States")

• Y: (Druid, States)
* From the BERT paper
Finetuning (4/4)
Single Sentence Tagging (NER)
• Named Entity Recognition (NER)

• Named entity?

• Organization (Emory)

• People (Bill Gates)

• Location (Atlanta) …

• Given a sentence, the task is to tag each word with the named entity type it indicates, if any

• X: "Dr. Xiong is a professor at
Emory"

• Y: B-PER I-PER O O O O B-ORG
* From the BERT paper
DeepDTA
• DeepDTA: Deep drug target
affinity

• Previous SOTA in DTI

• Bioinformatics (IF=5.481)

• Task: predicting affinity scores

• One-hot
embedding+CNN+Dense
[Architecture: drug vector → CNN, target vector → CNN; concatenated features → FFNN → affinity regression]
Convolution Operation
• A filter slides over the tokenized sequence, emitting one feature value per window
• Drug (SMILES): CS(=O)(=O)CCNCC1=CC=C(O1)C2=CC3=C(C=C2)N=CN=C3NC4=CC(=C(C=C4)OCC5=CC(=CC=C5)F)Cl
• Target (FASTA): MRPSGTAGAALLALLAALCPASRALEEKKVCQGTSNKLTQLGTFEDHFLSLQRMFNNCEV…APQSSEFIGA
[Figure: per-window convolution outputs accumulating along both sequences; each sequence feeds its own CNN branch of the DeepDTA architecture]
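
A minimal numpy sketch of the character-level 1D convolution DeepDTA slides over the SMILES string (toy embedding and a single width-3 filter, not the trained model):

```python
import numpy as np

smiles = "CS(=O)(=O)CCNCC1=CC=C(O1)C2=CC3=C(C=C2)N=CN=C3NC4=CC(=C(C=C4)OCC5=CC(=CC=C5)F)Cl"

# Toy character embedding (DeepDTA learns this lookup during training).
rng = np.random.default_rng(0)
emb = {ch: rng.normal(size=8) for ch in sorted(set(smiles))}
X = np.stack([emb[ch] for ch in smiles])  # (seq_len, 8)

# One width-3 filter slides over the sequence, emitting one feature
# value per window, as the running numbers on the slides suggest.
W = rng.normal(size=(3, 8))
features = np.array([np.sum(X[i:i + 3] * W) for i in range(len(smiles) - 2)])
print(features.shape)  # (seq_len - 2,)
```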
Limitations of CNN
• F and Cl will be convolved together (they are adjacent in the SMILES string), but they are actually far apart in the molecule

• Cl will never be convolved with N, yet in the molecule they are closer than Cl and F
• Local context (CNN) -> global context (self-attention)
CS(=O)(=O)CCNCC1=CC=C(O1)C2=CC3=C(C=C2)N=CN=C3NC4=CC(=C(C=C4)OCC5=CC(=CC=C5)F)Cl
Molecule Transformer
• BERT based sequence representation

• Pre-training: only masked LM 

• Special tokens - BERT: [CLS] [MASK] [SEP] vs Molecule Transformer: [REP] [MASK] [BEGIN] [END]
Special Tokens
• [REP]: same as [CLS]

• [BEGIN]/[END]: indicate whether a long sequence (>100 tokens) was truncated

• length<100: [REP] [BEGIN] C N = C = O [END] 

• length>100: [REP] C C = ( N == C) Cl … O = C
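
A minimal sketch of one plausible reading of this tokenization rule (the exact truncation behavior for long sequences is an assumption, not the paper's stated rule):

```python
def encode_smiles(tokens, max_len=100):
    """Assumption based on slide 43: short sequences keep the
    [BEGIN]/[END] delimiters; long ones are truncated and the
    delimiters are dropped, which itself signals truncation."""
    if len(tokens) < max_len:
        return ["[REP]", "[BEGIN]"] + tokens + ["[END]"]
    return ["[REP]"] + tokens[:max_len]

print(encode_smiles(list("CN=C=O")))  # ['[REP]', '[BEGIN]', 'C', 'N', '=', 'C', '=', 'O', '[END]']
```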
Pre-train
• PubChem database 

• 97,092,853 molecules

• Parameters: 8 layers, 8 heads, and hidden vector size 128

• On an 8-core TPU machine, pre-training took about 58 hours

• Masked LM task result:
0.9727 

(BERT was 0.9855)
Excerpt from the MT-DTI paper (masking procedure):
"We choose 15% of SMILES tokens at random for each molecule sequence, and replace the chosen token with the special token [MASK] with probability 0.8. For the other 20% of the time, we replace the chosen token with a random SMILES token or preserve the chosen token, with equal probability. The target label of this task is the chosen token with its index. For example, one possible prediction task for methyl isocyanate (CN=C=O) is
input : [REP] [BEGIN] C N = [MASK] = O [END]
label : (C, 5)
For fine-tuning, the weights of the pre-trained Transformers are used to initialize the Molecule Transformers in the proposed MT-DTI model."
Fine-tuning
• The protein branch uses a CNN without pre-training, since the number of distinct proteins is small
Evaluation Metrics
• C-Index

• Probability that two random samples are ordered correctly by the predictions

• MSE (mean square error)

• Metric used in QSAR[1]

• r^2 and r_0^2 are the squared
correlation coefficients with and
without intercept, respectively. 

• Acceptable model: value greater than 0.5

• AUPR - Area under the precision-recall curve
[Formula figures: Concordance Index (C-Index), and the QSAR metric r_m^2 = r^2 (1 - sqrt(r^2 - r_0^2))]
[1] Partha Pratim Roy, Somnath Paul, Indrani Mitra, and Kunal Roy. On two novel parameters for validation of predictive qsar models. Molecules, 14(5):1660–1701, 2009.
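
A minimal sketch of the concordance index as described above (naive O(n^2) pairwise version; prediction ties count 0.5):

```python
def c_index(y_true, y_pred):
    """Fraction of comparable pairs (y_true[i] > y_true[j]) that the
    predictions order correctly; ties in the predictions count 0.5."""
    num, den = 0.0, 0
    n = len(y_true)
    for i in range(n):
        for j in range(n):
            if y_true[i] > y_true[j]:
                den += 1
                if y_pred[i] > y_pred[j]:
                    num += 1.0
                elif y_pred[i] == y_pred[j]:
                    num += 0.5
    return num / den

print(c_index([3.1, 2.0, 4.5], [2.4, 2.5, 4.0]))  # 0.666...: 2 of 3 pairs concordant
```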
Result
• Five-fold CV

• MT-DTI outperforms all the other methods in all of the four metrics 

• MT-DTI w/o FT (without fine-tuning)

• Outperforms the similarity-based methods

• Performs better than DeepDTA on some metrics
Case Study Design
• Goal: To find drugs (among FDA-approved drugs) targeting a
specific protein, epidermal growth factor receptor (EGFR) 

• FDA-approved drugs: 1,794 molecules in the DrugBank database

• EGFR: a well-known gene related to many cancer types 

• Method: infer scores between EGFR and the 1,794 selected
drugs and sort in descending order

• Expected result: Actual EGFR targeting drugs will be highly
ranked
Case Study Result
• All existing EGFR drugs (8 of the 1,794) are ranked in the top 30

• A KIBA score > 12.1 indicates binding with the target

• Other, non-EGFR drugs ranked highly could be new anti-cancer drug candidates
Discussion
• Summary

• Pre-train self-attention network with 97M molecules

• Fine-tune the self-attention network for DTI prediction

• Results

• A new SOTA of DTI

• Promising drug candidates targeting a specific protein

• Published at MLHC’19 (JMLR)

• Future direction

• Molecule generation
• Molecule optimization