The aim of this project is to design an energy-efficient smart vision sensor with artificial intelligence (AI) capabilities integrated side-by-side with the sensory array. For the first time, the vision sensor will feature the co-design of novel AI algorithms and computationally efficient hardware together with a multi-feature vision sensor. The proposed integrated solution aims to bring smart vision systems to the next level of intelligence, with high robustness, low cost, low power, and low latency. The innovative aspects of the proposed smart vision sensor include:

i. Hardware-friendly AI algorithms with event-based processing.
ii. A new deep-computing architecture with energy-efficient mixed-signal processing.
iii. An event-based vision sensor with embedded intelligence (i.e., built-in feature extraction).
iv. Novel vision-system integration methodologies with application-specific optimization.

To the best of our knowledge, the proposed vision system is the first to combine innovations from all of these aspects in building a future vision chip. The chip can be deployed in a wide range of applications, from security, surveillance, medical, and logistics to assisted living (for the visually impaired and blind) and assisted driving (ADAS systems in future autonomous vehicles). It is anticipated that this work will lead to patentable outcomes, a set of high-quality publishable contributions, and the training of postgraduate students in an increasingly popular and emerging research area. It will help improve elderly healthcare, road safety, and travel comfort, in addition to contributing to the realization of future smart cities. The findings of this project related to the wide application of AI are directly aligned with the priority themes of the QNRS. The research presented in this proposal will be undertaken with a clear stake in commercial uptake and a mission to support an emerging AI industrial sector for the benefit of Qatar’s knowledge-based economy and community.