The primary need of autonomous vehicles is a robust and precise perception of the surrounding environment. To achieve accurate results, autonomous driving systems combine several different sensors, a technique commonly known as sensor fusion. However, a critical aspect of autonomous driving perception is also reactivity: we not only need to perform sensor fusion properly, but we must also respect real-time requirements. The latter are fundamental to guarantee the predictability of the system, avoid anomalous situations, and prevent hazards. Performing fusion online is not trivial, because sensors return large amounts of data and processing them may be time-consuming. The solution we present is an alignment of the sensors, which combines information from LiDAR and cameras: the alignment is computed once, in a preprocessing step, and allows us to exploit the precomputed matching in real time.
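The idea of splitting the work into an offline alignment step and a cheap online step can be illustrated with a minimal sketch. Here we assume the alignment takes the common form of a LiDAR-to-camera projection: the camera intrinsics `K` and the LiDAR-to-camera extrinsics `(R, t)` are composed once into a single 3x4 matrix, so that at runtime each LiDAR scan is mapped to image pixels with one matrix product. The function names and matrices are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def build_projection(K, R, t):
    # Offline (preprocessing) step, done once:
    # compose camera intrinsics K (3x3) with LiDAR-to-camera
    # extrinsics [R|t] into a single 3x4 projection matrix.
    return K @ np.hstack([R, t.reshape(3, 1)])

def project_points(P, pts):
    # Online (real-time) step, done per frame:
    # project Nx3 LiDAR points into pixel coordinates with one
    # matrix multiply, reusing the precomputed matrix P.
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # Nx4 homogeneous
    uvw = homo @ P.T                                     # Nx3
    return uvw[:, :2] / uvw[:, 2:3]                      # Nx2 pixel coords

# Example with hypothetical calibration values: a point one meter
# straight ahead of the camera lands on the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P = build_projection(K, np.eye(3), np.zeros(3))
pixels = project_points(P, np.array([[0.0, 0.0, 1.0]]))  # -> [[320., 240.]]
```

The design point is that the expensive part (calibration and matrix composition) never runs in the real-time loop; the per-frame cost reduces to a single linear map over the point cloud.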