12. Investment of Google Ventures in 2014-2015
[Pie charts of GV investment by sector.
2014: Life Science & Health 36%, Mobile 27%, Enterprise & Data 24%, Consumer 8%, Commerce 5%.
2015: Life Science & Health 31%, Consumer 24%, Enterprise 23%, Data & AI 13%, Others 9%.]
13.
14.
15. What is the most important factor in digital medicine?
16. “Data! Data! Data!” he cried. “I can’t make bricks without clay!”
- Sherlock Holmes, “The Adventure of the Copper Beeches”
17.
18. Three Steps to Implement Digital Medicine
• Step 1. Measure the Data
• Step 2. Collect the Data
• Step 3. Insight from the Data
19. Digital Healthcare Industry Landscape
Data Measurement Data Integration Data Interpretation Treatment
Smartphone Gadget/Apps
DNA
Artificial Intelligence
Telemedicine
2nd Opinion
Device
On Demand (O2O)
Wearables / IoT
3D Printer
Counseling
(ver. 1)
Digital Healthcare Institute
Director, Yoon Sup Choi, Ph.D.
yoonsup.choi@gmail.com
EMR/EHR
Data Platform
Accelerator/early-VC
20. Digital Healthcare Industry Landscape
Data Measurement Data Integration Data Interpretation Treatment
Smartphone Gadget/Apps
DNA
Artificial Intelligence
Telemedicine
Device
On Demand (O2O)
Wearables / IoT
3D Printer
Counseling
(ver. 0.6)
Digital Healthcare Institute
Director, Yoon Sup Choi, Ph.D.
yoonsup.choi@gmail.com
EMR/EHR
Data Platform
Accelerator/early-VC
38. Figure 2: Wearables are not mainstream - yet
Just one in five US consumers say they own a wearable device: 21% of US consumers currently own a wearable technology product. Of those, 10% wear it every day, 7% wear it a few times a week, 2% wear it a few times a month, and 2% no longer use it.
Source: HRI/CIS Wearables consumer survey 2014
PwC, Health wearables: early days, 2014
65. DNA SEQUENCING SOARS
Human genomes are being sequenced at an ever-increasing rate. The 1000 Genomes Project has aggregated hundreds of genomes; The Cancer Genome Atlas (TCGA) has gathered several thousand; and the Exome Aggregation Consortium (ExAC) has sequenced more than 60,000 exomes. Dotted lines show three possible future growth curves.
[Chart: cumulative number of human genomes, 10^0 to 10^9 on a log scale, 2001-2025. Milestones mark the Human Genome Project, the first personal genome, 1000 Genomes, TCGA, ExAC, and the current amount; recorded growth is followed by projections that double every 7 months (historical growth rate), every 12 months (Illumina estimate), or every 18 months (Moore's law).]
Michael Eisenstein, Nature, 2015
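The three projection curves diverge enormously by 2025. A quick Python sketch shows how the doubling times compound; the starting count of ~10^6 genomes in 2015 is an assumed order of magnitude for illustration, not a figure from the article:

```python
# Back-of-the-envelope projection of cumulative human genomes sequenced.
# START_2015 is an assumed order of magnitude, not a number from the article.
def projected_genomes(start_count, years_ahead, doubling_months):
    """Total after `years_ahead` years if the count doubles every `doubling_months` months."""
    doublings = years_ahead * 12 / doubling_months
    return start_count * 2 ** doublings

START_2015 = 1e6
for months, label in [(7, "historical growth rate"),
                      (12, "Illumina estimate"),
                      (18, "Moore's law")]:
    by_2025 = projected_genomes(START_2015, 10, months)
    print(f"Doubling every {months:2d} months ({label}): ~{by_2025:.1e} genomes by 2025")
```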
66.
67. Step 1. Measure the Data
• With your smartphone
• With wearable devices (connected to smartphone)
• Personal genome analysis
... without even going to the hospital!
72. [Diagram: patients' devices and apps - Dexcom CGM, Withings, Nike+, Apple Watch, and iPhone - feed the Dexcom and Withings apps, which share data through HealthKit with the Epic MyChart app and, from there, the Epic EHR database at the hospital.]
• Data is stored in a DB on the iPhone (not mirrored to the cloud)
• The consumer controls what data goes in/out and the privacy level
• HealthKit connects directly to devices and stores data based on privacy rules
73.
74. • Apple HealthKit is partnering with 14 of the 23 leading hospitals in the US
• Moving noticeably faster than the competing platforms Google Fit and S-Health
• CIO of Beth Israel Deaconess:
• “Many of our 250,000 patients are generating all kinds of data with wearables. Our hospital cannot provide an interface for every one of these devices. But Apple can.”
Feb 5, 2015
84. • Artificial Narrow Intelligence (weak AI)
• AI that is good at one specific domain
• Chess, quiz shows, mail filtering, product recommendation, autonomous driving
• Artificial General Intelligence (strong AI)
• Human-level AI across every domain
• Thinking, planning, problem solving, abstraction, learning complex concepts
• Artificial Super Intelligence
• AI that surpasses humans in every area, including science, technology, and social skills
• “Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke
93. IBM Watson on Medicine
Watson learned...
600,000 pieces of medical evidence
2 million pages of text from 42 medical journals and clinical trials
69 guidelines, 61,540 clinical trials
+ 1,500 lung cancer cases: physician notes, lab results and clinical research
+ 14,700 hours of hands-on training
94.
95. MD Anderson’s Oncology Expert Advisor (OEA), Powered by IBM Watson: A Web-Based Cognitive Clinical Decision Support Tool
• Trained with 400 historical patient cases
• Assessed the accuracy of OEA treatment suggestions using MD Anderson physicians’ decisions as the benchmark
• When 200 leukemia cases were tested:
• False positive rate = 2.9% (cases where the OEA-recommended treatment was incorrect)
• False negative rate = 0.4% (cases where the correct treatment received a low score)
• Overall accuracy of treatment recommendation = 82.6%
• Conclusion: the suggested personalized treatment options showed reasonably high accuracy
100. DeepFace: Closing the Gap to Human-Level
Performance in FaceVerification
Taigman,Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in FaceVerification, CVPR’14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
very few parameters. These layers merely expand the input
into a set of simple local features.
The subsequent layers (L4, L5 and L6) are instead lo-
cally connected [13, 16], like a convolutional layer they ap-
ply a filter bank, but every location in the feature map learns
a different set of filters. Since different regions of an aligned
image have different local statistics, the spatial stationarity
The goal of training is to maximize the probability of
the correct class (face id). We achieve this by minimiz-
ing the cross-entropy loss for each training sample. If k
is the index of the true label for a given input, the loss is:
L = log pk. The loss is minimized over the parameters
by computing the gradient of L w.r.t. the parameters and
Human: 95% vs. DeepFace in Facebook: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
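The loss above is ordinary softmax cross-entropy. A minimal numpy illustration of L = −log p_k, using toy class scores rather than the DeepFace network:

```python
import numpy as np

# Softmax cross-entropy for a single training sample: L = -log(p_k),
# where k is the index of the true class (face ID).
def cross_entropy(logits, true_index):
    logits = logits - logits.max()                 # stabilize the exponentials
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax probabilities
    return -np.log(probs[true_index])

logits = np.array([2.0, 0.5, -1.0])          # toy class scores
print(cross_entropy(logits, true_index=0))   # small loss: correct class dominates
print(cross_entropy(logits, true_index=2))   # large loss: wrong class favored
```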
101. FaceNet: A Unified Embedding for Face Recognition and Clustering
Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet of Google: 99.63%
Recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Figure 6. LFW errors. This shows all pairs of images (false accepts and false rejects) that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.
5.7. Performance on Youtube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12% ± 0.39. Using the first one thousand frames results in 95.18%. Compared to [17] 91.4%, who also evaluate one hundred frames per video, we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2%, and our method reduces this error by 30%, comparable to our improvement on LFW.
5.8. Face Clustering
Our compact embedding lends itself to be used in order to cluster a user’s personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user’s personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.
Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user’s personal photo collection were clustered together.
6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] who use the CNN bottleneck layer, or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance.
Another strength of our model is that it only requires …
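As a rough sketch of the clustering idea, the snippet below runs scikit-learn's agglomerative clustering over stand-in embedding vectors. The 128-dimensional vectors, the distance threshold, and the linkage choice are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Once faces live in a Euclidean embedding where distance tracks identity,
# grouping a photo collection reduces to agglomerative clustering of vectors.
# These 128-dim embeddings are random stand-ins, not FaceNet outputs.
rng = np.random.default_rng(0)
person_a = rng.normal(loc=0.0, scale=0.05, size=(5, 128))   # 5 photos of person A
person_b = rng.normal(loc=1.0, scale=0.05, size=(7, 128))   # 7 photos of person B
embeddings = np.vstack([person_a, person_b])

clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=2.0,
                                    linkage="average")
labels = clusterer.fit_predict(embeddings)
print(labels)  # photos of the same person share a cluster label
```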
102. Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
[Figure 1 example: Vision (Deep CNN) → Language Generating RNN produces captions such as “A group of people shopping at an outdoor market. There are many vegetables at the fruit stand.”]
Figure 1. NIC, our model, is based end-to-end on a neural network consisting of a vision CNN followed by a language generating RNN.
103. Show and Tell: A Neural Image Caption Generator
Vinyals, O. et al. (2015). Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555
Figure 5. A selection of evaluation results, grouped by human rating.
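A minimal PyTorch sketch of the CNN-encoder / RNN-decoder wiring that Figure 1 describes. All layer sizes, the tiny vocabulary, and the random inputs are invented for illustration and are far smaller than the real NIC model:

```python
import torch
import torch.nn as nn

# A vision CNN encodes the image into a feature vector, which is fed to an
# LSTM as the first "word"; the LSTM then predicts the caption word by word.
class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=10, embed_dim=32, hidden_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(                      # stand-in vision encoder
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.cnn(images).unsqueeze(1)       # (B, 1, embed_dim)
        words = self.embed(captions)                   # (B, T, embed_dim)
        seq = torch.cat([img_feat, words], dim=1)      # image conditions the RNN
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                        # next-word logits per step

model = TinyCaptioner()
images = torch.randn(2, 3, 64, 64)                     # two toy RGB images
captions = torch.randint(0, 10, (2, 5))                # two toy 5-word captions
print(model(images, captions).shape)                   # torch.Size([2, 6, 10])
```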
104.
105. Business Area: Medical Image Analysis
VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently to make diagnoses faster and more accurate.
[Image panels: Original Image → Automatic Segmentation, distinguishing Normal, Emphysema, and Reticular Opacity]
Our system finds DILDs with the highest accuracy. (* DILDs: Diffuse Interstitial Lung Disease)
Digital Radiologist
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
106. Digital Radiologist
Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
Collaboration with Prof. Joon Beom Seo (Asan Medical Center)
Analysed 1200 patients for 3 months
107. Figure 4. Participating Pathologists’ Interpretations of Each of the 240 Breast Biopsy Test Cases
[Four panels plot, for each test case, the percentage of interpretations falling into each category (benign without atypia, atypia, DCIS, invasive carcinoma):
(A) Benign without atypia: 72 cases, 2070 total interpretations
(B) Atypia: 72 cases, 2070 total interpretations
(C) DCIS: 73 cases, 2097 total interpretations
(D) Invasive carcinoma: 23 cases, 663 total interpretations
DCIS indicates ductal carcinoma in situ.]
Elmore et al., JAMA, 2015
Diagnostic Concordance Among Pathologists Interpreting Breast Biopsy Specimens
The overall agreement between the individual pathologists’ interpretations and the expert consensus-derived reference diagnoses was 75.3% (240 cases total).
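To make the statistic concrete, here is a toy computation of percent agreement against a consensus reference. The simulated pathologist and the 75% agreement probability are invented stand-ins, not the study's data:

```python
import numpy as np

# Percent agreement: fraction of cases where a reader's category matches
# the consensus reference diagnosis.
rng = np.random.default_rng(0)
reference = rng.integers(0, 4, size=240)      # consensus per case
                                              # (0=benign, 1=atypia, 2=DCIS, 3=invasive)
agrees = rng.random(240) < 0.75               # this reader agrees ~75% of the time
random_call = rng.integers(0, 4, size=240)    # otherwise an arbitrary category
reading = np.where(agrees, reference, random_call)
print(f"Overall agreement: {np.mean(reading == reference):.1%}")
```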
108. Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
[Figure 1. The C-Path image-analysis pipeline.
(A) Basic image processing and feature construction: the H&E image is broken into superpixels, and nuclei are identified within each superpixel.
(B) Building an epithelial/stromal classifier from characteristics of epithelial nuclei and epithelial cytoplasm and of stromal nuclei and stromal matrix.
(C) Constructing higher-level contextual/relational features: relationships between epithelial nuclear neighbors, between morphologically regular and irregular nuclei, between epithelial and stromal objects, between epithelial nuclei and cytoplasm, and between contiguous epithelial regions and their underlying nuclear objects.
(D) Learning an image-based model to predict survival: processed images from patients alive at 5 years and from patients deceased at 5 years feed L1-regularized logistic regression model building, producing a 5YS predictive model that assigns P(survival) over time to unlabeled images and identifies novel prognostically important morphologic features.]
TMAs contain 0.6-mm-diameter cores (median of two cores per case) that represent only a small sample of the full tumor. We acquired data from two separate and independent cohorts: Netherlands Cancer Institute (NKI; 248 patients) and Vancouver General Hospital (VGH; 328 patients).
Unlike previous work in cancer morphometry (18-21), our image analysis pipeline was not limited to a predefined set of morphometric features selected by pathologists. Rather, C-Path measures an extensive, quantitative feature set from the breast cancer epithelium and the stroma (Fig. 1). Our image processing system first performed an automated, hierarchical scene segmentation that generated thousands of measurements, including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. The pipeline consisted of three stages (Fig. 1, A to C, and tables S8 and S9). First, we used a set of processing steps to separate the tissue from the background, partition the image into small regions of coherent appearance known as superpixels, find nuclei within the superpixels, and construct …
[Figure 1 legend, continued] … basic cellular morphologic properties (epithelial regular nuclei = red; epithelial atypical nuclei = pale blue; epithelial cytoplasm = purple; stromal matrix = green; stromal round nuclei = dark green; stromal spindled nuclei = teal blue; unclassified regions = dark gray; spindled nuclei in unclassified regions = yellow; round nuclei in unclassified regions = gray; background = white). (Left panel) After the classification of each image object, a rich feature set is constructed. (D) Learning an image-based model to predict survival. Processed images from patients alive at 5 years after surgery and from patients deceased at 5 years after surgery were used to construct an image-based prognostic model. After construction of the model, it was applied to a test set of breast cancer images (not used in model building) to classify patients as high or low risk of death by 5 years.
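A minimal sketch of the prognostic-model step (panel D above), assuming scikit-learn's L1-penalized logistic regression as a stand-in for the paper's model-building procedure. Features and survival labels are random placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# L1-regularized logistic regression over thousands of morphologic features,
# predicting 5-year survival. The L1 penalty drives most coefficients to
# zero, selecting a sparse subset of prognostically useful features.
rng = np.random.default_rng(0)
X = rng.normal(size=(248, 6000))        # 248 patients x ~6000 image features (toy)
y = rng.integers(0, 2, size=248)        # 1 = alive at 5 years (toy labels)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
n_selected = np.count_nonzero(model.coef_)
print(f"{n_selected} of {X.shape[1]} features kept by the L1 penalty")
```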
109. Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
Top stromal features associated with survival.
… primarily characterizing epithelial nuclear characteristics, such as size, color, and texture (21, 36). In contrast, after initial filtering of images to ensure high-quality TMA images and training of the C-Path models using expert-derived image annotations (epithelium and stroma labels to build the epithelial-stromal classifier and survival time and survival status to build the prognostic model), our image analysis system is automated with no manual steps, which greatly increases its scalability. Additionally, in contrast to previous approaches, our system measures thousands of morphologic descriptors of diverse … identification of prognostic features whose significance was not previously recognized.
Using our system, we built an image-based prognostic model on the NKI data set and showed that in this patient cohort the model was a strong predictor of survival and provided significant additional prognostic information to clinical, molecular, and pathological prognostic factors in a multivariate model. We also demonstrated that the image-based prognostic model, built using the NKI data set, is a strong prognostic factor on another, independent data set with very different …
Fig. 5. Top epithelial features. The eight panels in the figure (A to H) each shows one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis. (A) SD of the (SD of intensity/mean intensity) for pixels within a ring of the center of epithelial nuclei. Left, relatively consistent nuclear intensity pattern (low score); right, great nuclear intensity diversity (high score). (B) Sum of the number of unclassified objects. Red, epithelial regions; green, stromal regions; no overlaid color, unclassified region. Left, few unclassified objects (low score); right, higher number of unclassified objects (high score). (C) SD of the maximum blue pixel value for atypical epithelial nuclei. Left, high score; right, low score. (D) Maximum distance between atypical epithelial nuclei. Left, high score; right, low score. (Insets) Red, atypical epithelial nuclei; black, typical epithelial nuclei. (E) Minimum elliptic fit of epithelial contiguous regions. Left, high score; right, low score. (F) SD of distance between epithelial cytoplasmic and nuclear objects. Left, high score; right, low score. (G) Average border between epithelial cytoplasmic objects. Left, high score; right, low score. (H) Maximum value of the minimum green pixel intensity value in epithelial contiguous regions. Left, low score indicating black pixels within epithelial region; right, higher score indicating presence of epithelial regions lacking black pixels.
… and stromal matrix throughout the image, with thin cords of epithelial cells infiltrating through stroma across the image, so that each stromal matrix region borders a relatively constant proportion of epithelial and stromal regions. The stromal feature with the second largest coefficient (Fig. 4B) was the sum of the minimum green intensity value of stromal-contiguous regions. This feature received a value of zero when stromal regions contained dark pixels (such as inflammatory nuclei). The feature received a positive value when stromal objects were devoid of dark pixels. This feature provided information about the relationship between stromal cellular composition and prognosis and suggested that the presence of inflammatory cells in the stroma is associated with poor prognosis, a finding consistent with previous observations (32). The third most significant stromal feature (Fig. 4C) was a measure of the relative border between spindled stromal nuclei to round stromal nuclei, with an increased relative border of spindled stromal nuclei to round stromal nuclei associated with worse overall survival. Although the biological underpinning of this morphologic feature is currently not known, this analysis suggested that spatial relationships between different populations of stromal cell types are associated with breast cancer progression.
Reproducibility of C-Path 5YS model predictions on samples with multiple TMA cores
For the C-Path 5YS model (which was trained on the full NKI data set), we assessed the intrapatient agreement of model predictions when predictions were made separately on each image contributed by patients in the VGH data set. For the 190 VGH patients who contributed two images with complete image data, the binary predictions (high or low risk) on the individual images agreed with each other for 69% (131 of 190) of the cases and agreed with the prediction on the averaged data for 84% (319 of 380) of the images. Using the continuous prediction score (which ranged from 0 to 100), the median of the absolute difference in prediction score among the patients with replicate images was 5%, and the Spearman correlation among replicates was 0.27 (P = 0.0002) (fig. S3). This degree of intrapatient agreement is only moderate, and these findings suggest significant intrapatient tumor heterogeneity, which is a cardinal feature of breast carcinomas (33-35). Qualitative visual inspection of images receiving discordant scores suggested that intrapatient variability in both the epithelial and the stromal components is likely to contribute to discordant scores for the individual images. These differences appeared to relate both to the proportions of the epithelium and stroma and to the appearance of the epithelium and stroma. Last, we sought to analyze whether survival predictions were more accurate on the VGH cases that contributed multiple cores compared to the cases that contributed only a single core. This analysis showed that the C-Path 5YS model showed significantly improved prognostic prediction accuracy on the VGH cases for which we had multiple images compared to the cases that contributed only a single image (Fig. 7). Together, these findings show a significant degree of intrapatient variability and indicate that increased tumor sampling is associated with improved model performance.
Fig. 4. Top stromal features associated with survival. (A) Variability in absolute difference in intensity between stromal matrix regions and neighbors. Top panel, high score (24.1); bottom panel, low score (10.5). (Insets) Top panel, high score; bottom panel, low score. Right panels, stromal matrix objects colored blue (low), green (medium), or white (high) according to each object’s absolute difference in intensity to neighbors. (B) Presence …
115. In an early research project involving 600 patient cases, the team was able to predict near-term hypoglycemic events up to 3 hours before symptoms appeared.
IBM Watson-Medtronic
Jan 7, 2016
126. Are we building a product that nobody wants?
• Figure out what the real need is.
• What customers say they want (X)
• What you think customers want (X)
• What real customers actually want (O)
• Customers may not know what they want, because they do not know what is possible.
141. Is it a solution that makes sense from a medical point of view?
• You need the advice of medical experts (physicians).
• Services and products without scientific or medical credibility (a.k.a. pseudo-medicine) are a problem.
• Services that do not fit clinical reality get ignored or run into fierce opposition.
• You do not necessarily need a physician on staff, but you do need someone you can ask for advice at any time.
• Even among physicians, dispositions and opinions differ.
142.
143. Understand what makes the Korean healthcare system unique
• The Korean healthcare system differs greatly from the American one.
• You need a clear grasp of the domestic system’s characteristics: access to care, the health insurance scheme, reimbursement rates, and so on.
• What worked in the US may not work in Korea, or may even be illegal here.
• That said, you need not confine yourself to the domestic market.
150. • Evidence is essential for healthcare and medical services.
• But the reality is that evidence is often lacking.
… applications, from photometric diagnostics to medical-grade imaging (16). Taking advantage of these properties, newly developed devices permit the automated determination of refractive error merely by having an individual look through a lens attached to a smartphone (17). Another transportable imaging capability involves the enabling of remote diagnosis through the use of a smartphone case with an attached otoscope (for detecting an ear infection) (18), multimodal colposcope for cervical cancer identification (19), or optical screening tool for potentially cancerous oral lesions (20). Dermatologic diagnostics may be especially well suited for exploiting the myriad smartphone capabilities for teledermatology (21).
The technologies highlighted above can improve care simply through their ability to markedly increase the accessibility and convenience of care by bringing clinic- and hospital-quality monitoring and diagnostics to the point of need. However, their greatest potential might be in allowing for the complete redefining of “normal” physiological responses and in enhancing our understanding of the natural histories of poorly defined chronic conditions. Continuous beat-to-beat monitoring of blood pressure throughout daily activities will help to refine the catchall diagnosis of “essential hypertension” as multiple distinct phenotypes. Similarly, understanding individual varia… views conclude that high-quality evidence is lacking for the use of mHealth to effect behavioral changes or to manage chronic diseases, …
Fig. 2. mHealth taking center stage. Measures are funding and number of related publications. Shown are the annual total funding for patient-facing mHealth companies and the annual number of related publications [identified with Web of Science (WoS) using search terms “telemedicine” and “mhealth*” and “digital health” and “digital medicine”], 2002-2013. Funding data provided by B. Dolan and A. Pai of MobiHealthNews.
You have to build the evidence.
151. You have to build the evidence.
• You will most likely need to collaborate with a medical institution.
• But working with medical institutions is not easy.
• Right person, right hospital, right department, right time…
• Physicians’ interests and a startup’s interests differ.
• What physicians and startups have in common: both are desperately short on resources.
• The best evidence is still the result of a clinical study.
• Study conditions are case by case.
• Randomised, double-blinded, controlled trials.
• Sufficient N, sufficient duration (see the sample-size sketch below).
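As one concrete way to think about “sufficient N”, here is a minimal sample-size sketch using statsmodels; the effect size, alpha, and power are conventional placeholder choices, not numbers from the talk:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per arm for a two-arm randomized trial, assuming a two-sided
# t-test. Effect size (Cohen's d), alpha, and power are placeholder values.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5,   # assumed moderate effect
                                 alpha=0.05,        # type I error rate
                                 power=0.8,         # 1 - type II error rate
                                 alternative="two-sided")
print(f"~{n_per_arm:.0f} participants per arm")     # roughly 64 per arm
```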
153. Healthcare is a regulated industry
• By its nature, regulation can only follow technological progress.
• The domestic regulatory situation is not good: rationality, consistency, and uncertainty are all concerns.
• Like it or not, pioneering the regulatory path is part of a startup’s role.
• Contacting the relevant agencies, such as the MFDS (the Korean Ministry of Food and Drug Safety), early on is also advisable.
159. What do digital healthcare accelerators need most?
[Survey results, bar chart on a 0-18 scale:]
• Healthcare domain experts
• Collaboration with physicians and hospitals
• Entrepreneurs with experience founding and exiting healthcare startups
• Experts in early-stage investment and other fundraising
• Manufacturing technology experts and support services
• Support for entering overseas markets and attracting foreign investment
Source: Mobile Healthcare | 웨어러블 디바이스, 모바일 헬스케어
https://www.facebook.com/groups/koreamobilehealthcare/