The rapid deployment of sensors, actuators, and smart objects in the Internet of Things (IoT) enables the creation of new smart services through fine-grained data acquisition. These smart services, despite their many reported benefits, open up the potential for abuse and pose privacy risks. For instance, intimate personal data can be exploited by anti-social actors for malicious purposes without individuals' informed consent, with users often unaware of the presence and capabilities of deployed smart devices and of what data about them is being captured. In addition, many IoT smart services leverage Machine Learning (ML) techniques for tasks such as voice/speech recognition, face recognition, and crime prediction. Unfortunately, such ML models have been shown to be vulnerable to adversarial examples: inputs carefully crafted to fool an ML model. The purpose of this project is to address the urgent need to evaluate and mitigate the security and privacy risks of ML-driven IoT smart services before the technology is widely adopted, and to propose pro-social, responsible solutions for ML-driven IoT.
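To make the adversarial-example threat concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known attack of this kind, applied to a toy logistic-regression classifier. The model weights and input here are illustrative stand-ins, not any real IoT model; for logistic regression the input gradient of the cross-entropy loss has a closed form, so no autodiff library is needed.

```python
import numpy as np

# Illustrative "smart-service" classifier: a logistic regression with
# random weights standing in for a trained model (assumption, not a
# real deployed system).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps=0.2):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss. For logistic regression with cross-entropy
    loss, the gradient w.r.t. the input is (p - y_true) * w.
    """
    p = predict_proba(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

# A clean input with true label 1, and its adversarial counterpart.
x = rng.normal(size=8)
x_adv = fgsm(x, y_true=1.0)
print(predict_proba(x), predict_proba(x_adv))
```

Even this tiny perturbation, bounded by `eps` per feature, measurably lowers the model's confidence in the true class; against deep models the same idea can flip predictions entirely while the change remains imperceptible to humans.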