3D object classification can be realised by rendering views of the same object from different angles and aggregating these views to build a classifier. Although this approach has previously been proposed for general object classification, most existing works do not consider visual impairments. In contrast, this paper addresses 3D object classification for driving applications under impairments (e.g. occlusion and sensor noise) by generating an application-specific dataset. We present a cooperative object classification method in which multiple images of the same object, seen from different perspectives (agents), are exploited to produce more accurate classifications. We evaluate the model's generalisation capability and its resilience to impairments. We introduce an occlusion model that more closely resembles real-world occlusion, together with a simplified sensor noise model. The experimental results show that the cooperative model, relying on multiple views, significantly outperforms single-view methods and is effective in mitigating the effects of occlusion and sensor noise.
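To make the multi-view aggregation idea concrete, the following is a minimal sketch of a cooperative (multi-view) classifier, assuming an MVCNN-style design in which a shared per-view feature extractor is followed by element-wise max pooling across views and a common classification head. The backbone architecture, feature dimension, and pooling choice are illustrative assumptions for exposition, not the exact configuration used in the paper.

```python
# Minimal multi-view (cooperative) classification sketch.
# Assumption: per-view features are pooled across views (MVCNN-style)
# before a shared classifier; layer sizes are purely illustrative.
import torch
import torch.nn as nn


class MultiViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, feat_dim: int = 128):
        super().__init__()
        # Shared per-view feature extractor (small illustrative CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, H, W) -- one image per agent/viewpoint.
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))  # (b*v, feat_dim)
        feats = feats.reshape(b, v, -1)
        pooled, _ = feats.max(dim=1)  # element-wise max pooling across views
        return self.classifier(pooled)


if __name__ == "__main__":
    model = MultiViewClassifier(num_classes=3)
    dummy = torch.randn(2, 4, 3, 64, 64)  # 2 objects, 4 views (agents) each
    print(model(dummy).shape)             # torch.Size([2, 3])
```

Pooling across views lets the model handle a variable number of agents and naturally discounts views that are degraded by occlusion or noise, which is the intuition behind the cooperative approach.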