# CARLA Autonomous Driving Simulation: Environment Setup and Demo Walkthrough

## 1. Overview

### 1.1 What is CARLA?

CARLA is an open-source autonomous-driving simulation platform developed jointly by Intel Labs and the Computer Vision Center of the Autonomous University of Barcelona. Built on Unreal Engine, it provides a realistic virtual environment for developing, training, and validating autonomous-driving systems.

Key features:

- **Open source**: completely free, with an active community
- **Realistic environments**: high-fidelity urban scenes, vehicle models, and sensor simulation
- **Flexible interface**: a Python API that is easy to integrate
- **Rich sensor suite**: cameras, lidar, radar, GPS, and more
- **Multiple maps**: urban, rural, and highway scenarios

### 1.2 Why CARLA?

Developing a real autonomous-driving system faces several challenges:

- **Safety risk**: on-road testing is inherently dangerous
- **High cost**: sensors, vehicles, and maintenance are expensive
- **Scenario limits**: extreme situations are hard to reproduce on demand
- **Regulation**: real-world testing is constrained by law

CARLA sidesteps these problems through simulation, letting researchers develop and test algorithms in a safe, controllable environment.

## 2. Results

*(Screenshots of the running 3x3 sensor grid appeared here in the original post.)*

## 3. Environment Setup

### 3.1 Ubuntu 22.04 (recommended)

#### 3.1.1 Docker (easiest)

```bash
# 1. Stop and remove any existing carla container
docker stop carla
docker rm carla

# 2. Run a new carla container
docker run --gpus all --name carla --shm-size=128g -it \
    -e NVIDIA_VISIBLE_DEVICES=all --privileged --net=host \
    -v $PWD:/workspace carlasim/carla:0.9.15 bash

# 3. Start the CARLA server inside the container (headless mode)
./CarlaUE4.sh -RenderOffScreen -nosound
```

Flag reference:

- `--gpus all`: use all GPUs (CARLA needs GPU-accelerated rendering)
- `--shm-size=128g`: shared-memory size (CARLA needs a lot of memory)
- `--privileged`: give the container privileged access to hardware devices
- `--net=host`: use the host network for easy communication
- `-v $PWD:/workspace`: mount the current directory at `/workspace` in the container

### 3.2 Windows

#### 3.2.1 Installing the Python packages (Python 3.8)

```bash
# CARLA Python client library
pip install carla==0.9.15

# Required dependencies
pip install opencv-python==4.12.0.88 pygame==2.6.1
pip install numpy==1.22
```

The demo below also imports `scipy` and `pyquaternion`, so install those as well if you do not already have them.

## 4. The Demo Program

### 4.1 What it does

- Detects surrounding vehicles, pedestrians, and traffic signs
- Computes their positions in each sensor's field of view
- Projects their 3D bounding boxes onto the 2D images for visualization

### 4.2 Program flow

1. **Connect** to the CARLA server
2. **Spawn the ego vehicle** at a random spawn point
3. **Attach sensors**: six cameras and one lidar
4. **Start traffic**: spawn other vehicles
5. **Main loop**: tick the world, collect sensor data, detect and visualize obstacles, update the display

### 4.3 Sensor types

#### RGB camera

- Simulates a real camera
- Resolution and field of view (FOV) are configurable
- Outputs RGB image data

#### Lidar

- Simulates lidar point clouds
- Channel count, range, and rotation frequency are configurable
- Outputs 3D point-cloud data

### 4.4 Camera layout

```python
# Grid positions of the six cameras, mimicking a surround-view system
CAMERA_POS = {
    'camera_front':       [0, 1],  # front
    'camera_right_front': [0, 2],  # front right
    'camera_left_front':  [0, 0],  # front left
    'camera_rear':        [1, 1],  # rear
    'camera_left_rear':   [1, 0],  # rear left
    'camera_right_rear':  [1, 2],  # rear right
}
```

This layout mimics the surround-view systems common on autonomous vehicles, providing 360-degree environmental coverage.

### 4.5 Code
```python
import cv2
import numpy as np
import os
import json
import time
from datetime import datetime
from typing import Tuple, List, Dict
from pyquaternion import Quaternion
import glob
import sys
import carla
import argparse
import random
import math
import weakref
import pygame
from pygame.locals import K_ESCAPE
from pygame.locals import K_q
from scipy.spatial.transform import Rotation as R

BB_COLOR = (248, 64, 24)


class DisplayManager:
    """Lays out sensor views in a pygame window grid."""

    def __init__(self, grid_size, window_size):
        pygame.init()
        pygame.font.init()
        self.display = pygame.display.set_mode(window_size, pygame.HWSURFACE | pygame.DOUBLEBUF)
        self.grid_size = grid_size
        self.window_size = window_size
        self.sensor_list = []

    def get_window_size(self):
        return [int(self.window_size[0]), int(self.window_size[1])]

    def get_display_size(self):
        # Size of one grid cell: window size divided by (cols, rows)
        return [int(self.window_size[0] / self.grid_size[1]),
                int(self.window_size[1] / self.grid_size[0])]

    def get_display_offset(self, gridPos):
        # Top-left pixel of the cell at [row, col]
        dis_size = self.get_display_size()
        return [int(gridPos[1] * dis_size[0]), int(gridPos[0] * dis_size[1])]

    def add_sensor(self, sensor):
        self.sensor_list.append(sensor)

    def get_sensor_list(self):
        return self.sensor_list

    def render(self):
        if not self.render_enabled():
            return
        for s in self.sensor_list:
            s.render()
        pygame.display.flip()

    def destroy(self):
        for s in self.sensor_list:
            s.destroy()

    def render_enabled(self):
        return self.display is not None


class SensorManager:
    """Spawns a sensor, buffers its latest frame, and renders it into a grid cell."""

    def __init__(self, world, display_man, sensor_type, transform, attached, sensor_options, display_pos):
        self.surface = None
        self.world = world
        self.display_man = display_man
        self.display_pos = display_pos
        self.sensor_type = sensor_type
        self.sensor = self.init_sensor(sensor_type, transform, attached, sensor_options)
        self.sensor_options = sensor_options
        self.display_man.add_sensor(self)

    def init_sensor(self, sensor_type, transform, attached, sensor_options):
        if sensor_type == 'RGBCamera':
            camera_bp = self.world.get_blueprint_library().find('sensor.camera.rgb')
            disp_size = self.display_man.get_display_size()
            camera_bp.set_attribute('image_size_x', str(disp_size[0]))
            camera_bp.set_attribute('image_size_y', str(disp_size[1]))
            camera_bp.set_attribute('fov', str(90))
            camera = self.world.spawn_actor(camera_bp, transform, attach_to=attached)
            camera.listen(self.save_rgb_image)
            # Pinhole intrinsics from the image size and the 90-degree FOV
            calibration = np.identity(3)
            calibration[0, 2] = disp_size[0] / 2.0
            calibration[1, 2] = disp_size[1] / 2.0
            calibration[0, 0] = calibration[1, 1] = disp_size[0] / (2.0 * np.tan(90 * np.pi / 360.0))
            camera.calibration = calibration
            return camera
        elif sensor_type == 'LiDAR':
            lidar_bp = self.world.get_blueprint_library().find('sensor.lidar.ray_cast')
            lidar_bp.set_attribute('range', '100')
            lidar_bp.set_attribute('dropoff_general_rate',
                                   lidar_bp.get_attribute('dropoff_general_rate').recommended_values[0])
            lidar_bp.set_attribute('dropoff_intensity_limit',
                                   lidar_bp.get_attribute('dropoff_intensity_limit').recommended_values[0])
            lidar_bp.set_attribute('dropoff_zero_intensity',
                                   lidar_bp.get_attribute('dropoff_zero_intensity').recommended_values[0])
            for key in sensor_options:
                lidar_bp.set_attribute(key, sensor_options[key])
            lidar = self.world.spawn_actor(lidar_bp, transform, attach_to=attached)
            lidar.listen(self.save_lidar_image)
            return lidar
        else:
            return None

    def get_sensor(self):
        return self.sensor

    def draw_bounding_boxes(self, bounding_boxes):
        """Draw projected 3D bounding boxes onto this sensor's surface."""
        bb_surface = self.surface
        if bb_surface is None:
            return
        bb_surface.set_colorkey((0, 0, 0))
        for bbox in bounding_boxes:
            points = [(int(bbox[i, 0]), int(bbox[i, 1])) for i in range(8)]
            # base
            pygame.draw.line(bb_surface, BB_COLOR, points[0], points[1])
            pygame.draw.line(bb_surface, BB_COLOR, points[1], points[2])
            pygame.draw.line(bb_surface, BB_COLOR, points[2], points[3])
            pygame.draw.line(bb_surface, BB_COLOR, points[3], points[0])
            # top
            pygame.draw.line(bb_surface, BB_COLOR, points[4], points[5])
            pygame.draw.line(bb_surface, BB_COLOR, points[5], points[6])
            pygame.draw.line(bb_surface, BB_COLOR, points[6], points[7])
            pygame.draw.line(bb_surface, BB_COLOR, points[7], points[4])
            # base-to-top edges
            pygame.draw.line(bb_surface, BB_COLOR, points[0], points[4])
            pygame.draw.line(bb_surface, BB_COLOR, points[1], points[5])
            pygame.draw.line(bb_surface, BB_COLOR, points[2], points[6])
            pygame.draw.line(bb_surface, BB_COLOR, points[3], points[7])

    def save_rgb_image(self, image):
        image.convert(carla.ColorConverter.Raw)
        array = np.frombuffer(image.raw_data, dtype=np.dtype('uint8'))
        array = np.reshape(array, (image.height, image.width, 4))
        array = array[:, :, :3]    # drop alpha
        array = array[:, :, ::-1]  # BGR -> RGB
        disp_size = self.display_man.get_display_size()
        array = cv2.resize(array, (disp_size[0], disp_size[1]))
        self.raw_image = array.copy()
        if self.display_man.render_enabled():
            self.surface = pygame.surfarray.make_surface(array.swapaxes(0, 1))

    def save_lidar_image(self, image):
        # Project the point cloud onto a top-down image
        disp_size = self.display_man.get_display_size()
        lidar_range = 2.0 * float(self.sensor_options['range'])
        points = np.frombuffer(image.raw_data, dtype=np.dtype('f4'))
        points = np.reshape(points, (int(points.shape[0] / 4), 4))
        lidar_data = np.array(points[:, :2])
        lidar_data *= min(disp_size) / lidar_range
        lidar_data += (0.5 * disp_size[0], 0.5 * disp_size[1])
        lidar_data = np.fabs(lidar_data)  # pylint: disable=E1111
        lidar_data = lidar_data.astype(np.int32)
        lidar_data = np.reshape(lidar_data, (-1, 2))
        lidar_img_size = (disp_size[0], disp_size[1], 3)
        lidar_img = np.zeros((lidar_img_size), dtype=np.uint8)
        lidar_img[tuple(lidar_data.T)] = (255, 255, 255)
        if self.display_man.render_enabled():
            self.surface = pygame.surfarray.make_surface(lidar_img)

    def render(self):
        if self.surface is not None:
            offset = self.display_man.get_display_offset(self.display_pos)
            self.display_man.display.blit(self.surface, offset)

    def destroy(self):
        self.sensor.destroy()


class VehicleInfoManager:
    """Renders the ego vehicle's speed and control state into a grid cell."""

    def __init__(self, world, display_man, vehicle, display_pos):
        self.world = world
        self.display_man = display_man
        self.display_pos = display_pos
        self.vehicle = vehicle
        # Pick a monospace font
        pygame.font.init()
        font_name = 'courier' if os.name == 'nt' else 'mono'
        fonts = [x for x in pygame.font.get_fonts() if font_name in x]
        default_font = 'ubuntumono'
        mono = default_font if default_font in fonts else fonts[0]
        mono = pygame.font.match_font(mono)
        self.font = pygame.font.Font(mono, 16)
        self.sensor_type = 'VehicleInfoManager'
        self.display_man.add_sensor(self)

    def render(self):
        try:
            # Read vehicle state
            transform = self.vehicle.get_transform()
            velocity = self.vehicle.get_velocity()
            control = self.vehicle.get_control()
            # Speed in km/h
            speed = 3.6 * math.sqrt(velocity.x ** 2 + velocity.y ** 2 + velocity.z ** 2)
            # Grid-cell size and offset
            display_size = self.display_man.get_display_size()
            offset = self.display_man.get_display_offset(self.display_pos)
            # Semi-transparent black background
            surface = pygame.Surface(display_size, pygame.SRCALPHA)
            surface.fill((0, 0, 0, 150))
            text_lines = [
                f'Vehicle Speed: {speed:.1f} km/h',
                f'Location: ({transform.location.x:.1f}, {transform.location.y:.1f}, {transform.location.z:.1f})',
                f'Rotation: (Pitch: {transform.rotation.pitch:.1f}, Yaw: {transform.rotation.yaw:.1f}, Roll: {transform.rotation.roll:.1f})',
                f'Throttle: {control.throttle:.2f}',
                f'Steer: {control.steer:.2f}',
                f'Brake: {control.brake:.2f}',
                f'Reverse: {control.reverse}',
                f'Hand Brake: {control.hand_brake}',
                f'Gear: {control.gear}',
            ]
            y_offset = 20
            for line in text_lines:
                text = self.font.render(line, True, (255, 255, 255))
                surface.blit(text, (20, y_offset))
                y_offset += 30
            self.display_man.display.blit(surface, offset)
        except Exception as e:
            print(f'Error rendering vehicle info: {e}')

    def destroy(self):
        pass
```
```python
def create_bb_points(actor):
    """Return the 8 corners of an actor's 3D bounding box (homogeneous coords)."""
    if hasattr(actor, 'bounding_box'):
        cords = np.zeros((8, 4))
        extent = actor.bounding_box.extent
        cords[0, :] = np.array([extent.x, extent.y, -extent.z, 1])
        cords[1, :] = np.array([-extent.x, extent.y, -extent.z, 1])
        cords[2, :] = np.array([-extent.x, -extent.y, -extent.z, 1])
        cords[3, :] = np.array([extent.x, -extent.y, -extent.z, 1])
        cords[4, :] = np.array([extent.x, extent.y, extent.z, 1])
        cords[5, :] = np.array([-extent.x, extent.y, extent.z, 1])
        cords[6, :] = np.array([-extent.x, -extent.y, extent.z, 1])
        cords[7, :] = np.array([extent.x, -extent.y, extent.z, 1])
        return cords
    return None


def apollo_lidar2cam_to_carla_transform(lidar2cam):
    """Convert an Apollo lidar2cam extrinsic matrix into a CARLA transform.

    Args:
        lidar2cam: 4x4 transform from the Apollo lidar frame to the camera frame.

    Returns:
        (location, rotation): camera pose relative to the lidar.
    """
    # Split rotation and translation
    R_apollo = lidar2cam[:3, :3]
    t_apollo = lidar2cam[:3, 3]

    # Apollo lidar -> CARLA lidar
    # Apollo lidar: X forward, Y left,  Z up
    # CARLA lidar:  X forward, Y right, Z up
    # so the Y axis must be flipped
    R_apollo_lidar_to_carla_lidar = np.array([
        [1, 0, 0],   # X unchanged
        [0, -1, 0],  # Y flipped
        [0, 0, 1],   # Z unchanged
    ])

    # Apollo camera -> CARLA camera
    # Typical Apollo camera frame: X right, Y down, Z forward
    # CARLA camera frame:          X forward, Y right, Z up
    # (verify this against your own configuration)
    R_apollo_cam_to_carla_cam = np.array([
        [0, 0, 1],   # Apollo Z (forward) -> CARLA X (forward)
        [1, 0, 0],   # Apollo X (right)   -> CARLA Y (right)
        [0, -1, 0],  # Apollo Y (down)    -> CARLA Z (up), negated
    ])

    # Transform expressed in the CARLA frames
    R_carla = R_apollo_cam_to_carla_cam @ R_apollo @ R_apollo_lidar_to_carla_lidar.T
    t_carla = R_apollo_cam_to_carla_cam @ t_apollo

    # Rotation matrix -> Euler angles (CARLA uses ZYX order)
    rotation = R.from_matrix(R_carla)
    euler_angles = rotation.as_euler('ZYX', degrees=True)
    # CARLA Rotation is (pitch, yaw, roll); euler_angles is [yaw, pitch, roll] (ZYX)
    roll = euler_angles[2]   # about X
    yaw = euler_angles[0]    # about Z
    pitch = euler_angles[1]  # about Y
    # Going from a right-handed to a left-handed frame flips the yaw sign:
    # positive yaw is counter-clockwise (seen from above) in the right-handed
    # frame but clockwise in CARLA's left-handed frame, so negate it.
    yaw = -yaw
    return t_carla, (pitch, yaw, roll)


camera_configs_str = '''[
    {"camera_external": [0.0769339508198897, -0.2200085016053469, 0.9724594728998291, 0.0,
                         -0.9962485995120333, 0.021797081417668097, 0.08374732958121199, 0.0,
                         -0.03962190280079179, -0.9752544008939626, -0.21750622601526376, 0.0,
                         -0.05985818412087545, -0.02006400697329872, -1.10083029171711, 1.0]},
    {"camera_external": [-0.9236677363748564, -0.06344835072795686, 0.3779050404136608, 0.0,
                         -0.37968565503868806, 0.2846895612921289, -0.8802219362462725, 0.0,
                         -0.05173699003605453, -0.9565177261719704, -0.2870489912555204, 0.0,
                         0.5485629895545788, 0.02486425266606691, -1.2679619215543618, 1.0]},
    {"camera_external": [0.9639149065693595, 0.0027616073051752164, 0.26619621788912523, 0.0,
                         -0.2510631785739221, -0.3230607045314356, 0.9124686633260406, 0.0,
                         0.088517417821249, -0.9463742149449892, -0.31070969091663997, 0.0,
                         -0.5956841659726075, 0.028962591874081182, -1.0325612676086606, 1.0]},
    {"camera_external": [-0.08101152051215454, 0.2911864472551092, -0.9532300805572891, 0.0,
                         0.9959281413554327, -0.014301527871016557, -0.08900900829055367, 0.0,
                         -0.03955086346240828, -0.9565594175144073, -0.2888421886841946, 0.0,
                         -0.04908311501294596, 0.07121003220643372, -1.1817303072419392, 1.0]},
    {"camera_external": [0.8996255182409487, 0.00621075263095839, -0.43661808652581163, 0.0,
                         0.421159019810905, -0.2764118757320042, 0.8638411630538526, 0.0,
                         -0.11532132049895125, -0.9610191993431098, -0.2512828914436995, 0.0,
                         0.4765084156415575, -0.04302769576922575, -1.02887686607957, 1.0]},
    {"camera_external": [-0.9632748484429129, 0.03353001133987921, -0.2664156615081599, 0.0,
                         0.2677101747206758, 0.1967963133883053, -0.9431874009908088, 0.0,
                         0.0208045357628595, -0.9798708840333605, -0.1985452641474292, 0.0,
                         -0.6243595244819944, 0.025188591843020802, -1.1158441130296437, 1.0]}
]'''
# with open('00000000.json', 'r') as f:
#     camera_configs = json.load(f)
camera_configs = json.loads(camera_configs_str)

CAMERA_ORDER = ['camera_front', 'camera_right_front', 'camera_left_front',
                'camera_rear', 'camera_left_rear', 'camera_right_rear']
CAMERA_POS = {
    'camera_front': [0, 1],
    'camera_right_front': [0, 2],
    'camera_left_front': [0, 0],
    'camera_rear': [1, 1],
    'camera_left_rear': [1, 0],
    'camera_right_rear': [1, 2],
}

client = carla.Client('127.0.0.1', 2000)
client.set_timeout(5.0)
world = client.get_world()
original_settings = world.get_settings()
traffic_manager = client.get_trafficmanager(8000)
settings = world.get_settings()
traffic_manager.set_synchronous_mode(True)
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

display_manager = None
vehicle = None
vehicle_list = []

# Spawn the ego vehicle
blueprint = random.choice(world.get_blueprint_library().filter('vehicle.audi.a2'))
blueprint.set_attribute('role_name', 'hero')
if blueprint.has_attribute('color'):
    color = random.choice(blueprint.get_attribute('color').recommended_values)
    blueprint.set_attribute('color', color)
world_map = world.get_map()
while vehicle is None:
    spawn_points = world_map.get_spawn_points()
    spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform()
    vehicle = world.try_spawn_actor(blueprint, spawn_point)
physics_control = vehicle.get_physics_control()
physics_control.use_sweep_wheel_collision = True
vehicle.apply_physics_control(physics_control)
vehicle_list.append(vehicle)
vehicle.set_autopilot(True)

display_manager = DisplayManager(grid_size=[3, 3], window_size=[1920, 1080])

# Lidar mounted 3 m above the vehicle origin (z axis)
lidar = SensorManager(world, display_manager, 'LiDAR',
                      carla.Transform(carla.Location(z=3)), vehicle,
                      {'channels': '64', 'range': '100',
                       'points_per_second': '250000', 'rotation_frequency': '20'},
                      display_pos=[2, 0])

# Six surround-view cameras, placed from the Apollo extrinsics above
for i, camera_config in enumerate(camera_configs):
    camera_name = CAMERA_ORDER[i]
    # The extrinsics are stored column-major, hence order='F'
    lidar2cam_rt = np.array(camera_config['camera_external'], dtype=np.float32).reshape((4, 4), order='F')
    if i == 0:
        print(lidar2cam_rt)
    location, rotation = apollo_lidar2cam_to_carla_transform(lidar2cam_rt)
    relative_transform = carla.Transform(
        carla.Location(x=float(location[0]), y=float(location[1]), z=float(location[2])),
        carla.Rotation(pitch=float(rotation[0]), yaw=float(rotation[1]), roll=float(rotation[2])))
    print(i, camera_name, CAMERA_POS[camera_name])
    SensorManager(world, display_manager, 'RGBCamera', relative_transform,
                  lidar.sensor, {}, display_pos=CAMERA_POS[camera_name])

# Bird's-eye camera overlooking the map
SensorManager(world, display_manager, 'RGBCamera',
              carla.Transform(carla.Location(x=0, y=0, z=50), carla.Rotation(pitch=-90)),
              lidar.sensor, {'image_size_x': '1280', 'image_size_y': '720'},
              display_pos=[2, 1])
# Ego speed / steering / braking readout at display_pos [2, 2]
VehicleInfoManager(world, display_manager, vehicle, display_pos=[2, 2])

actor_list = []
blueprint_library = world.get_blueprint_library()
# Spawn other traffic vehicles
for i in range(5):
    spawn_point = random.choice(world.get_map().get_spawn_points())
    bp = random.choice(blueprint_library.filter('vehicle'))
    npc = world.try_spawn_actor(bp, spawn_point)
    if npc is not None:
        actor_list.append(npc)
        npc.set_autopilot(True, traffic_manager.get_port())
        print('created %s' % npc.type_id)

call_exit = False
counter = 0
while True:
    world.tick()
    # Gather bounding boxes of nearby vehicles, pedestrians, and traffic lights,
    # then convert them into the lidar frame
    current_location = vehicle.get_location()
    all_actors = world.get_actors()
    vehicles = all_actors.filter('vehicle.*')
    pedestrians = all_actors.filter('walker.pedestrian.*')
    traffic_lights = all_actors.filter('traffic.traffic_light.*')
    # Merge every actor type we want to detect
    all_detected_actors = list(vehicles) + list(pedestrians) + list(traffic_lights)

    all_cameras = [s for s in display_manager.sensor_list if s.sensor_type == 'RGBCamera']
    for camera in all_cameras:
        nearby_actors = []
        for actor in all_detected_actors:
            # Skip the ego vehicle itself
            if actor.id == vehicle.id:
                continue
            actor_location = actor.get_location()
            distance = current_location.distance(actor_location)
            # Only keep actors within 50 m
            if distance < 50:
                # Ray cast to check whether the actor is occluded
                camera_transform = camera.sensor.get_transform()
                # Small forward offset so the ray does not hit the sensor itself
                start_location = camera_transform.location + camera_transform.get_forward_vector() * 0.1
                end_location = actor_location
                # CARLA's client-side ray-cast API
                hit_result = world.cast_ray(start_location, end_location)
                hidden = False
                for label in [str(x.label) for x in hit_result]:
                    if 'Building' in label:
                        hidden = True
                        break
                # Visible if the ray did not hit a building on the way
                if not hidden:
                    nearby_actors.append(actor)

        lidar_bounding_boxes = []
        # World frame: x forward, y right, z up
        # Standard (computer-vision) camera frame: x right, y down, z forward
        # World -> camera therefore also needs: x -> y, y -> -z, z -> x
        rotation_correction = np.asmatrix([
            [0, 1, 0, 0],
            [0, 0, -1, 0],
            [1, 0, 0, 0],
            [0, 0, 0, 1]])
        # get_matrix(): actor-local frame -> world frame
        lidar2world_rt = np.asmatrix(lidar.sensor.get_transform().get_matrix())
        # get_inverse_matrix(): world frame -> actor-local frame
        world2camera_rt = np.asmatrix(camera.sensor.get_transform().get_inverse_matrix())
        world2camera_rt = rotation_correction @ world2camera_rt
        lidar2camera_rt = world2camera_rt @ lidar2world_rt
        if counter == 0:
            print('lidar -> camera transform:\n', lidar2camera_rt)
        world2lidar_rt = np.asmatrix(lidar.sensor.get_transform().get_inverse_matrix())

        bounding_boxes = []
        for actor in nearby_actors:
            # actor2camera_rt would work directly; here we deliberately take the
            # longer route through the lidar frame
            actor2world_rt = np.asmatrix(actor.get_transform().get_matrix())
            actor2lidar_rt = world2lidar_rt @ actor2world_rt
            # actor2camera_rt = world2camera_rt @ actor2world_rt
            # The box extent is relative to bounding_box.location
            bb_points = create_bb_points(actor)
            if bb_points is not None:
                # bounding-box frame -> actor-local frame
                bb2actor_rt = np.asmatrix(carla.Transform(actor.bounding_box.location).get_matrix())
                actor_points = bb2actor_rt @ bb_points.T      # actor-local frame
                lidar_points = actor2lidar_rt @ actor_points  # lidar frame
                lidar_bounding_boxes.append({
                    'actor_id': actor.id,
                    'actor_type': actor.type_id,
                    'lidar_points': lidar_points.T})
                # Camera frame
                camera_points = lidar2camera_rt @ lidar_points
                # Apply the intrinsics to get image-plane coordinates
                bbox = np.transpose(np.dot(camera.sensor.calibration, camera_points[:3, :]))
                # Perspective division by depth: distant points shrink, near points grow
                camera_bbox = np.concatenate(
                    [bbox[:, 0] / bbox[:, 2], bbox[:, 1] / bbox[:, 2], bbox[:, 2]], axis=1)
                bounding_boxes.append(camera_bbox)
        # Keep only boxes fully in front of the camera
        bounding_boxes = [bb for bb in bounding_boxes if all(bb[:, 2] > 0)]
        # Draw the projected 3D boxes
        camera.draw_bounding_boxes(bounding_boxes)

    counter += 1
    display_manager.render()
    pygame.display.flip()

    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            call_exit = True
        elif event.type == pygame.KEYDOWN:
            if event.key == K_ESCAPE or event.key == K_q:
                call_exit = True
                break
    if call_exit:
        break

# Cleanup
if display_manager:
    display_manager.destroy()
for actor in actor_list:
    actor.destroy()
client.apply_batch([carla.command.DestroyActor(x) for x in vehicle_list])
world.apply_settings(original_settings)
```

## 5. Practical Applications

1. **Algorithm development and testing**
   - Perception (object detection, semantic segmentation)
   - Decision and planning
   - Control
2. **Sensor-fusion research**
   - Camera-lidar data fusion
   - Multi-sensor calibration validation
   - Time-synchronization studies
3. **Safety testing**
   - Extreme scenarios (bad weather, sudden events)
   - Fault injection
   - Safety-boundary exploration
4. **Dataset generation**
   - Annotated training data
   - Datasets covering diverse scenarios
   - Data augmentation and diversity

## 6. Going Further

### 6.1 Advanced features

- **Custom maps**: build your own scenes with tools such as RoadRunner
- **Weather control**: dynamically adjust conditions (rain, snow, fog, day/night)
- **Traffic-flow simulation**: model complex traffic with CARLA's Traffic Manager
- **ROS integration**: connect CARLA to ROS/ROS2
- **Reinforcement learning**: use CARLA as a DRL training environment

### 6.2 Related tools

- **Autoware.Auto**: open-source autonomous-driving stack
- **Apollo**: Baidu's autonomous-driving platform
- **LGSVL Simulator**: another autonomous-driving simulator
- **CARLA Challenge**: autonomous-driving competition platform

## 7. Summary

CARLA provides a powerful, flexible platform for autonomous-driving research. Through this demo you can:

- Quickly set up a CARLA development environment
- Understand how a multi-sensor rig is put together
- Master the key coordinate-frame transformations
- Implement basic perception and visualization

## 8. References

- CARLA documentation: `core_sensors`
- CARLA documentation: `core_world`
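The grid arithmetic that `DisplayManager` uses to place each sensor view can be checked in isolation. A minimal sketch, assuming the demo's 1920x1080 window and 3x3 grid (the `cell_offset` helper here is illustrative, not part of the demo):

```python
# Grid math as used by DisplayManager: window 1920x1080, 3 rows x 3 cols.
window = (1920, 1080)
rows, cols = 3, 3

# Each cell is (window width / cols) by (window height / rows).
cell = (window[0] // cols, window[1] // rows)

def cell_offset(pos):
    """Top-left pixel of the grid cell at [row, col]."""
    return (pos[1] * cell[0], pos[0] * cell[1])

print(cell)                  # (640, 360)
print(cell_offset([0, 2]))   # camera_right_front -> (1280, 0)
print(cell_offset([2, 1]))   # bird's-eye view    -> (640, 720)
```

This is why the demo's six cameras are requested at 640x360: each camera blueprint is configured to exactly one grid cell's size.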
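The intrinsics recipe and the perspective division used in the demo's projection step can also be exercised without a simulator. A minimal sketch assuming a 640x360 image and the same 90-degree FOV as the camera blueprint (the `project` helper is illustrative):

```python
import numpy as np

# Same intrinsics recipe as in SensorManager.init_sensor: principal point at the
# image center, focal length derived from a 90-degree horizontal FOV.
w, h = 640, 360
K = np.identity(3)
K[0, 2] = w / 2.0
K[1, 2] = h / 2.0
K[0, 0] = K[1, 1] = w / (2.0 * np.tan(90 * np.pi / 360.0))  # 320 px for 90 deg

def project(points_cam):
    """Project Nx3 camera-frame points (x right, y down, z forward) to pixels."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division by depth

# The same 1 m lateral offset shrinks with distance:
near = project(np.array([[1.0, 0.0, 5.0]]))   # -> [[384.0, 180.0]]
far = project(np.array([[1.0, 0.0, 50.0]]))   # -> [[326.4, 180.0]]
```

At 5 m the offset lands 64 px from the image center; at 50 m it lands only 6.4 px away, which is exactly the "distant points look smaller" effect the depth division produces.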