PR-232: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Sunghoon Joo (주성훈), Samsung SDS
2020. 3. 15.
Paper link: https://arxiv.org/abs/2003.03384
Video presentation link: https://youtu.be/J__uJ79m01Q

  1. 1. PR-232: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. Sunghoon Joo (주성훈), Samsung SDS, 2020. 3. 15.
  2. 2. 1. Research Background
  3. 3. 1. Research Background: Introduction
• AutoML automates the design of ML models by searching over architectures and hyperparameters:
  • Neural Architecture Search (NAS)
  • Hyperparameters
  • Learning rules (activation function, full forward pass, data augmentation, weight optimization, layer and weight pruning)
AutoML survey: https://arxiv.org/pdf/1810.13306.pdf
  4. 4. 1. Research Background: Architecture search
• Most prior work uses a constrained search space: hand-designed building blocks are fixed, and the NAS algorithm only searches over how to combine them.
• Results therefore depend heavily on how the constrained search space is designed.
Search space examples: Saining Xie et al. (2019) https://arxiv.org/pdf/1904.01569.pdf PR-155; Golnaz Ghiasi et al. (2019) https://arxiv.org/pdf/1904.01569.pdf PR-166; Yanan Sun et al. (2019) https://arxiv.org/pdf/1710.10741.pdf
  5. 5. 1. Research Background: AutoML-Zero
• "We propose to automatically search for whole ML algorithms using little restriction on form and only simple mathematical operations as building blocks."
 The allowed operations are elementary; no matrix decompositions and no derivatives are provided.
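The "simple mathematical operations as building blocks" idea can be sketched as follows: an algorithm is nothing more than instruction lists (Setup, Predict, Learn) acting on a small shared memory of scalar and vector addresses. The op set, address names, and encoding below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

# Illustrative mini version of the AutoML-Zero program representation.
# Op names and the (op, in1, in2, out) encoding are assumptions.
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "dot": lambda a, b: float(np.dot(a, b)),
}

def run(instructions, mem):
    # Execute each instruction: mem[out] = op(mem[in1], mem[in2]).
    for op, a, b, out in instructions:
        mem[out] = OPS[op](mem[a], mem[b])

# A hand-written "linear predictor" in this instruction language:
# Predict: s1 = dot(v0, v1)   (prediction = w . x)
predict = [("dot", "v0", "v1", "s1")]

mem = {"v0": np.array([1.0, 2.0]),   # weights w
       "v1": np.array([3.0, 4.0]),   # features x
       "s1": 0.0}
run(predict, mem)
print(mem["s1"])  # 1*3 + 2*4 = 11.0
```

Evolution then edits these instruction lists directly, which is why the search space contains whole algorithms rather than architectures built from fixed blocks.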
  6. 6. 1. Research Background: AutoML-Zero
• "We propose to automatically search for whole ML algorithms using little restriction on form and only simple mathematical operations as building blocks."
Starting from a blank slate all the way to the final algorithm: a truly enormous search space… (4 days of search)
  7. 7. 2. Methods
  8. 8. 2. Methods: Evolutionary method (example with P=5, T=3)
Type (i): insert or delete a random instruction; deletion is twice as likely as insertion.
Type (ii): replace all instructions within a function.
Type (iii): replace a single argument. When a real-valued constant is modified, it is multiplied by a random number in [0.5, 2.0], and its sign is flipped with 10% probability.
Tournament selection: sample T candidates at random from the population.
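The loop described above (regularized evolution with population P and tournament size T, plus the constant-mutation rule) can be sketched as follows. The fitness function and the use of a bare float as the "program" are placeholders for illustration, not the paper's implementation.

```python
import random

def mutate_constant(c):
    # Type (iii) applied to a real-valued constant: multiply by a random
    # factor in [0.5, 2.0], then flip the sign with probability 10%.
    c *= random.uniform(0.5, 2.0)
    if random.random() < 0.1:
        c = -c
    return c

def evolve(population, fitness, T=10, steps=1000, mutate=mutate_constant):
    # Regularized evolution: sample T candidates, copy the best, mutate
    # the copy, append it, and remove the oldest member (FIFO aging).
    for _ in range(steps):
        tournament = random.sample(population, T)
        best = max(tournament, key=fitness)
        population.append(mutate(best))
        population.pop(0)
    return max(population, key=fitness)

# Toy usage: "programs" are real constants, fitness favors values near 5.
random.seed(0)
pop = [random.uniform(0.0, 1.0) for _ in range(100)]
best = evolve(pop, fitness=lambda c: -abs(c - 5.0))
print(len(pop))  # population size stays constant at 100
```

Note that aging (removing the oldest, not the worst) is what distinguishes regularized evolution from plain tournament selection.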
  9. 9. 2. Methods: in Step 3, the best algorithm of the tournament is copied and mutated.
  10. 10. 3. Experimental Results
  11. 11. 3. Experimental Results: Random search (RS)
• Evolution vs. RS success rate: the fraction of runs that find acceptable algorithms, where an acceptable algorithm is one that outperforms a hand-designed reference model.
• Task difficulty: log10 of the number of algorithms RS must evaluate to find one acceptable algorithm. Ex) in the linear regression case, RS finds 1 acceptable algorithm per ~10^7.4 candidates, so the linear regressor task difficulty is 7.4.
Because acceptable algorithms are sparse in the search space, AutoML-Zero's evolution greatly outperforms RS.
[Figure labels: tasks requiring 4 ops, 7 ops, 5 ops, 9 ops]
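The difficulty figure above can be reproduced with a one-line calculation. The function name here is illustrative; the metric is simply log10 of evaluations per acceptable algorithm found.

```python
import math

def task_difficulty(evaluations, acceptable_found):
    # Difficulty = log10 of how many random candidates must be evaluated
    # per acceptable algorithm found.
    return math.log10(evaluations / acceptable_found)

# E.g., 1 acceptable algorithm in 10**7.4 (~25 million) random candidates:
print(round(task_difficulty(10**7.4, 1), 1))  # 7.4
```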
  12. 12. 3. Experimental Results: AutoML-Zero vs. a hand-designed reference (2-layer FC NN)
• Binary classification tasks built from CIFAR-10 and MNIST: the 10 classes yield 10C2 = 45 pairs, each with 8000 train / 2000 validation examples.
• 36 of the 45 pairs form Tsearch (the search tasks; 1 to 10 of them are used per evolution cycle); the remaining 9 form Tselect (used to select the algorithm with the best accuracy).
• Final evaluation on the CIFAR-10 test set.
• Number of possible operations: 7/58/58 for Setup/Predict/Learn.
• Training epochs: 1 or 10; evolution parameters: P=100, T=10; maximum number of instructions for Setup/Predict/Learn: 21/21/45.
 Figure 6 shows one illustrative run (5, 20).
  13. 13. 3. Experimental Results: AutoML-Zero vs. a hand-designed reference (2-layer FC NN)
• The best model's parameters (learning rate, mean of the uniform init distribution, etc.) are tuned by random search on the Tselect datasets; likewise, the linear/nonlinear baselines' hyperparameters are tuned by random search.
• [CIFAR-10] Best algorithm accuracy over 5 trials: 84.06 ± 0.10%. Linear baseline (logistic regression): 77.65 ± 0.22%. Nonlinear baseline (2-layer fully connected neural network): 82.22 ± 0.17%.
• Transfer to other binary classification tasks: 1) SVHN (32 x 32 x 3): 88.12% AutoML-Zero vs. 59.58% linear baseline vs. 85.14% nonlinear baseline; 2) down-sampled ImageNet (128 x 128 x 3): 80.78% vs. 76.44% vs. 78.44%; 3) Fashion MNIST (28 x 28 x 1): 98.60% vs. 97.90% vs. 98.21%.
 Even though the search space design includes neither convolution nor batch normalization, AutoML-Zero outperforms the 2-layer FC NN.
  14. 14. 3. Experimental Results: Challenging tasks for AutoML-Zero
1) Few training examples
• With a training dataset of 80 examples run for 100 epochs, AutoML-Zero discovers a noisy ReLU (a dropout-like regularizer).
• Is this adaptive? Comparing 30 runs on small data (80 examples) vs. large data (800 examples), the noisy ReLU emerges significantly more often on small data (p < 0.0005).
  15. 15. 3. Experimental Results: Challenging tasks for AutoML-Zero
2) Fast training
• With a training dataset of 800 examples run for only 10 epochs, AutoML-Zero discovers learning-rate decay.
• Is this adaptive? Over 30 runs each, learning-rate decay appears in 30/30 runs in the 10-epoch case but in only 3/30 runs in the 100-epoch case.
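For readers unfamiliar with the trick evolution rediscovered here: learning-rate decay shrinks the step size as training progresses, so a short training budget is spent with large early steps and stable late ones. The evolved algorithm's exact decay form is not reproduced here; the sketch below is a common, generic 1/t-style schedule.

```python
def decayed_lr(base_lr, step, decay=0.01):
    # Generic inverse-time decay: lr_t = lr_0 / (1 + decay * t).
    # The specific functional form is an assumption, not the evolved one.
    return base_lr / (1.0 + decay * step)

print(decayed_lr(0.1, 0))               # step 0: full base rate, 0.1
print(round(decayed_lr(0.1, 100), 4))   # step 100: halved, 0.05
```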
  16. 16. 3. Experimental Results: Challenging tasks for AutoML-Zero
3) Multiple classes
• When all 10 CIFAR-10 classes are used, the discovered algorithm modulates the learning rate via sin(weight …).
• Multi-class vs. binary-class, 30 runs each: the mechanism appears in 0/30 binary-class runs but in 24/30 multi-class runs.
 AutoML-Zero adapts its discoveries to the task at hand.
  17. 17. 3. Experimental Results: Speeding up the search
1) Migration
• Exchange algorithms across parallel worker populations.
2) Functional equivalence checking (FEC)
• Fingerprint each candidate by its predictions on a small number of examples; skip re-evaluating candidates whose fingerprint has already been seen.
4) Hurdle
• Abandon candidates whose intermediate accuracy falls below a threshold, saving evaluation time.
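The two single-worker shortcuts above can be sketched together. The cache layout, function names, and hurdle threshold here are illustrative assumptions; the point is that FEC deduplicates by behavior (not by code text), and the hurdle cuts off weak candidates before full evaluation.

```python
import hashlib

fec_cache = {}  # fingerprint -> cached fitness

def fingerprint(predict_fn, probes):
    # FEC: two algorithms that produce the same predictions on a few
    # probe inputs are treated as functionally equivalent.
    preds = tuple(round(predict_fn(x), 6) for x in probes)
    return hashlib.sha256(str(preds).encode()).hexdigest()

def evaluate(predict_fn, probes, full_eval, hurdle=0.5, early_eval=None):
    fp = fingerprint(predict_fn, probes)
    if fp in fec_cache:                  # FEC hit: reuse cached fitness
        return fec_cache[fp]
    if early_eval is not None and early_eval(predict_fn) < hurdle:
        fec_cache[fp] = 0.0              # failed the hurdle: stop early
        return 0.0
    score = full_eval(predict_fn)        # expensive full evaluation
    fec_cache[fp] = score
    return score

# Two functionally equivalent candidates share one fingerprint:
probes = [0.0, 1.0, 2.0]
a = lambda x: 2 * x
b = lambda x: x + x
s1 = evaluate(a, probes, full_eval=lambda f: 0.9)
s2 = evaluate(b, probes, full_eval=lambda f: 0.7)  # FEC hit: cached 0.9
print(s1, s2)  # 0.9 0.9
```

Deduplicating on predictions rather than source text is what makes FEC robust to the many syntactically different but behaviorally identical programs that mutation produces.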
  18. 18. 4. Conclusion
  19. 19. 4. Conclusions. Thank you.
• AutoML-Zero pursues an ambitious goal for AutoML: searching for entire ML algorithms from scratch.
• Future work: extend the search space with higher-order tensors and function calls.
• From only three component functions (Setup, Predict, Learn), evolution rediscovered linear regressors, neural networks, gradient descent, multiplicative interactions, weight averaging, and normalized gradients.
• AutoML-Zero is a promising first step in this direction.