
Polestar crash

OP

LoPro

Well-known member
Joined
Jan 1, 2021
Threads
5
Messages
186
Reaction score
84
Location
Norway
Vehicles
Tesla Model 3 DM LR
Lidar is also blinded at those times.

I would bet Tesla's plan here is to leverage the cameras against each other in their all-in-one composite view for the AI. So the AI can do the human equivalent of squinting out of one eye, but with seven others.

-Crissa
Interesting about lidar. And yes, a composite view would be an improvement over our two eyes set pretty close together. I'm sure I haven't thought this through, but I thought none of the cameras overlapped each other, and that was one of the reasons we didn't get a 360° view on the Teslas (thus far)?
 

Diehard

Well-known member
First Name
D
Joined
Dec 5, 2020
Threads
16
Messages
1,527
Reaction score
310
Location
U.S.A.
Vehicles
Olds Aurora V8, Saturn Sky redline, Lightning, CT2
Interesting about lidar. And yes, a composite view would be an improvement over our two eyes set pretty close together. I'm sure I haven't thought this through, but I thought none of the cameras overlapped each other, and that was one of the reasons we didn't get a 360° view on the Teslas (thus far)?
This is an older article:
https://heartbeat.fritz.ai/computer-vision-at-tesla-cd5e88074376

It has a sample image in section 2 that shows two of the front cameras pointing in the same direction. Apparently one is a zoom lens. It makes sense: it helps with predicting what is coming without having to increase resolution. Selectively processing images from these two, versus one much higher-resolution camera, probably takes less brain power.
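
As a rough back-of-the-envelope comparison, here is a small Python sketch. All numbers are illustrative assumptions, not Tesla's actual sensor specs:

# Pixel budget: a wide + narrow ("zoom") camera pair vs. one big sensor.
# All numbers are illustrative assumptions, not Tesla's actual specs.
wide_fov_deg = 120    # assumed field of view of the wide camera
narrow_fov_deg = 35   # assumed field of view of the narrow camera
mp_per_camera = 1.2   # assumed megapixels per camera

pair_mp = 2 * mp_per_camera

# A single sensor matching the narrow camera's angular resolution across the
# whole wide field needs roughly (120/35)^2 times the pixels. (This treats
# resolution as proportional to angle; true perspective projection widens
# the gap even further.)
single_mp = mp_per_camera * (wide_fov_deg / narrow_fov_deg) ** 2

print(f"camera pair:   {pair_mp:.1f} MP per frame")    # ~2.4 MP
print(f"single sensor: {single_mp:.1f} MP per frame")  # ~14.1 MP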

I started looking into alternative points of view (bugs) and how their vision works. There is tons of great engineering already out there to copy from. All eight cameras seem to be of the same type. Having different kinds of eyes could give interesting info. The processing power may be a limiting factor.
 
OP

LoPro

Well-known member
Joined
Jan 1, 2021
Threads
5
Messages
186
Reaction score
84
Location
Norway
Vehicles
Tesla Model 3 DM LR
This is an older article:
https://heartbeat.fritz.ai/computer-vision-at-tesla-cd5e88074376

It has a sample image in section 2 that shows two of the front cameras pointing in the same direction. Apparently one is a zoom lens. It makes sense: it helps with predicting what is coming without having to increase resolution. Selectively processing images from these two, versus one much higher-resolution camera, probably takes less brain power.

I started looking into alternative points of view (bugs) and how their vision works. There is tons of great engineering already out there to copy from. All eight cameras seem to be of the same type. Having different kinds of eyes could give interesting info. The processing power may be a limiting factor.
I see from the article what you are talking about now. Of course there's more than one camera facing a given direction, with different settings like zoom, etc. Crissa said as much; I just didn't connect it. It doesn't look like there's any limitation blocking the mentioned 360° view feature either, but their focus is probably on the more advanced uses.
 

Diehard

Well-known member
First Name
D
Joined
Dec 5, 2020
Threads
16
Messages
1,527
Reaction score
310
Location
U.S.A.
Vehicles
Olds Aurora V8, Saturn Sky redline, Lightning, CT2
I see from the article what you are talking about now. Of course there's more than one camera facing a given direction, with different settings like zoom, etc. Crissa said as much; I just didn't connect it. It doesn't look like there's any limitation blocking the mentioned 360° view feature either, but their focus is probably on the more advanced uses.
I think a 360° view for the CT is a must. I know drivers with a pulse are becoming irrelevant to Elon, but I still matter to me ;)
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Threads
82
Messages
11,802
Reaction score
3,841
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Interesting about lidar. And yes, a composite view would be an improvement over our two eyes set pretty close together. I'm sure I haven't thought this through, but I thought none of the cameras overlapped each other, and that was one of the reasons we didn't get a 360° view on the Teslas (thus far)?
Yes, they have lots of overlapping space. This both lets them knit the video together and get stereoscopic distance information.

Especially the front view, which has the most cameras.

-Crissa
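
For anyone curious how the overlap turns into distance, here is a minimal Python sketch of the stereo idea. The focal length and baseline are made-up numbers, not Tesla's calibration:

# Two overlapping cameras see the same object shifted by some pixels
# (the disparity); the shift shrinks as distance grows.
focal_px = 1400.0   # assumed lens focal length, in pixels
baseline_m = 0.15   # assumed spacing between the two lenses, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Distance to a point seen in both views, from its pixel shift."""
    return focal_px * baseline_m / disparity_px

# An object that appears shifted 10 px between the two views is ~21 m away.
print(f"{depth_from_disparity(10.0):.1f} m")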
 

MAJMurphy

Member
First Name
MAJM
Joined
Mar 10, 2020
Threads
0
Messages
11
Reaction score
3
Location
Canada
Vehicles
3 motor
The eye's dynamic range is significantly better than a camera's. So the raw data for a human should be better. Then it's a competition for processing that data.
 

Diehard

Well-known member
First Name
D
Joined
Dec 5, 2020
Threads
16
Messages
1,527
Reaction score
310
Location
U.S.A.
Vehicles
Olds Aurora V8, Saturn Sky redline, Lightning, CT2
The eye's dynamic range is significantly better than a camera's. So the raw data for a human should be better. Then it's a competition for processing that data.
Even the human eye uses multiple types of sensors (rods and cones). We can overcome the dynamic range problem and beat the human eye by using multiple cameras with different exposures. As you pointed out, intelligently processing that data within our computational limits is the tricky part.
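
Rough stop-counting for the multiple-exposure idea, in Python; the per-camera range and the exposure gap are assumed figures:

import math

camera_stops = 12          # assumed dynamic range of one camera, in stops
exposure_offset_stops = 4  # assumed exposure gap between the two cameras

# The two exposure ranges overlap in the middle, so the combined range is
# the single-camera range extended by the offset between them.
combined_stops = camera_stops + exposure_offset_stops

for label, stops in [("one camera ", camera_stops),
                     ("two cameras", combined_stops)]:
    print(f"{label}: {stops} stops = {20 * math.log10(2 ** stops):.0f} dB")
# 12 stops = ~72 dB; 16 stops = ~96 dB of scene contrast covered.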
 

Bond007

Well-known member
First Name
Bond
Joined
Feb 24, 2020
Threads
1
Messages
85
Reaction score
14
Location
WV
Vehicles
Cybertruck
There's still the issue of a tunnel painted on a wall. I think radar will be far superior here and will clearly see that there's no road ahead, without any further computing necessary.
 

HaulingAss

Well-known member
Joined
Oct 3, 2020
Threads
10
Messages
3,460
Reaction score
669
Location
Washington State
Vehicles
2010 F-150, 2018 Model 3 P, FS DM Cybertruck
Even the human eye uses multiple types of sensors (rods and cones). We can overcome the dynamic range problem and beat the human eye by using multiple cameras with different exposures. As you pointed out, intelligently processing that data within our computational limits is the tricky part.
It's not even necessary to use multiple cameras to achieve high dynamic range. The same camera can take successive images at different ISOs by adjusting sensor gain instantly between frames; the human eye cannot do that. The result would be a video of interleaved frames at different ISOs, and the images would be processed to produce every frame with super-high dynamic range.
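
A sketch of what that merge step could look like, assuming an alternating low-gain/high-gain stream; the gain ratio and blend rule here are illustrative, not Tesla's pipeline:

import numpy as np

def merge_pair(low_gain: np.ndarray, high_gain: np.ndarray,
               gain_ratio: float = 16.0) -> np.ndarray:
    """Fuse two frames of the same scene shot at different sensor gains.

    low_gain:  exposed for highlights (bright areas not clipped)
    high_gain: exposed for shadows (dark areas lifted above the noise floor)
    """
    low = low_gain.astype(np.float32) * gain_ratio   # scale to common units
    high = high_gain.astype(np.float32)
    # Trust the high-gain frame everywhere except where it clipped near white.
    weight = np.clip((240.0 - high) / 240.0, 0.0, 1.0)
    return weight * high + (1.0 - weight) * low

# The interleaved video pairs up as merge(f0, f1), merge(f2, f3), ...
lo = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
hi = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(merge_pair(lo, hi).shape)  # (480, 640), float32, high dynamic range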

I tested Autopilot on a curvy country road on a dark night and was super impressed. When I turned the headlights off, it could see the road edges better than I could. The forward cameras can also see better than I can when driving into a blinding sun. I keep my lenses/windshield pretty clean, because this is not the case when there is road grime in front of the lenses.
 

HaulingAss

Well-known member
Joined
Oct 3, 2020
Threads
10
Messages
3,460
Reaction score
669
Location
Washington State
Vehicles
2010 F-150, 2018 Model 3 P, FS DM Cybertruck
Yes, they have lots of overlapping space. This both lets them knit the video together and get stereoscopic distance information.

Especially the front view, which has the most cameras.

-Crissa
Currently, Tesla is creating 3D models of the driving space in real time without using stereo lenses. They use AI trained on millions of images annotated with distance data derived from radar and LIDAR. After training the neural net, the system can annotate non-stereo images with distance information to a high degree of accuracy. This is used to construct 3D models of the driving space, which are then fed into a different neural net that outputs the desired trajectory. That trajectory is then converted into the appropriate control responses.
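
To make the training idea concrete, here is a toy PyTorch sketch: a tiny made-up network learns per-pixel depth from a single image, supervised by a lidar-derived depth map (random stand-in data; this is not Tesla's actual architecture):

import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy stand-in for a monocular depth network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # one depth value per pixel
        )

    def forward(self, rgb):
        return self.net(rgb)

model = TinyDepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for one camera frame and its lidar-annotated depth map.
rgb = torch.rand(1, 3, 96, 128)
lidar_depth = torch.rand(1, 1, 96, 128) * 80.0  # meters

opt.zero_grad()
loss = nn.functional.l1_loss(model(rgb), lidar_depth)
loss.backward()
opt.step()
print(f"L1 depth error: {loss.item():.2f} m")
# After training on millions of such pairs, the net can annotate plain camera
# frames with distance, with no lidar on the production car.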

It's amazing how quickly it all happens but that just shows us what can happen when you have hardware designed for the task and efficient software. It's replicating a human stream of consciousness (but without emotions and the non-driving related elements).
 

Diehard

Well-known member
First Name
D
Joined
Dec 5, 2020
Threads
16
Messages
1,527
Reaction score
310
Location
U.S.A.
Vehicles
Olds Aurora V8, Saturn Sky redline, Lightning, CT2
There's still the issue of a tunnel painted on a wall. I think radar will be far superior here and will clearly see that there's no road ahead, without any further computing necessary.
I think that case can be resolved without radar, given enough light and resolution, the same way we can tell the difference. With a camera on the right and one on the left, you can see whether any pixels inside the tunnel change position with respect to the outside pixels as you get closer. Daylight or the vehicle's headlights should make enough of the inside visible. Of course, using multiple technologies always provides an edge, but when you are in a competitive market, lower cost is often the most significant edge. Besides, if you can make it work without radar, the applications can go far beyond cars. Drones need to be lighter and could do with less stuff stuffed in them.
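
A Python sketch of that parallax test with synthetic points (needs opencv-python; the wall's motion and the extra-parallax offset are made up, just to show the flat-vs-deep discrimination):

import numpy as np
import cv2

rng = np.random.default_rng(0)

def project(H, pts):
    """Apply a 3x3 homography to Nx2 points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:]

# Assumed motion of the wall plane between the left and right views.
H_true = np.array([[1.02, 0.01, 5.0],
                   [0.00, 1.02, 2.0],
                   [0.00, 0.00, 1.0]])

wall_v1 = rng.uniform(0, 640, (40, 2))  # features on the painted wall
wall_v2 = project(H_true, wall_v1)      # they all obey the wall's plane

# Fit the wall plane from its features, then test the "tunnel interior".
H_est, _ = cv2.findHomography(wall_v1.astype(np.float32),
                              wall_v2.astype(np.float32))

inner_v1 = rng.uniform(200, 400, (10, 2))
painted_v2 = project(H_true, inner_v1)  # paint: sits on the wall plane
real_v2 = painted_v2 + [8.0, 0.0]       # crude stand-in for real depth parallax

def plane_residual(p1, p2):
    return float(np.linalg.norm(project(H_est, p1) - p2, axis=1).mean())

print("painted interior:", plane_residual(inner_v1, painted_v2))  # ~0 px
print("real opening:    ", plane_residual(inner_v1, real_v2))     # ~8 px
# ~0 px means everything lies on one plane: it's a wall, so brake.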
 
OP

LoPro

Well-known member
Joined
Jan 1, 2021
Threads
5
Messages
186
Reaction score
84
Location
Norway
Vehicles
Tesla Model 3 DM LR
I think that case can be resolved without radar, given enough light and resolution, the same way we can tell the difference. With a camera on the right and one on the left, you can see whether any pixels inside the tunnel change position with respect to the outside pixels as you get closer. Daylight or the vehicle's headlights should make enough of the inside visible. Of course, using multiple technologies always provides an edge, but when you are in a competitive market, lower cost is often the most significant edge. Besides, if you can make it work without radar, the applications can go far beyond cars. Drones need to be lighter and could do with less stuff stuffed in them.
Also, the system double-checks against the mapping data that there is actually a tunnel there (and why would the road go there if there wasn't?). If the tunnel is closed for some reason, the car can get real-time traffic data and see the warning lights and signs.

It's for the same reason that hooligans can't just hide the lane lines and paint new white lines leading off a highway bridge: an alarm would go off.
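
Purely as an illustration of that cross-checking idea (every name and structure below is made up for the sketch, not any real Tesla interface):

from dataclasses import dataclass
from typing import Optional

@dataclass
class MapTunnel:
    distance_m: float  # where the map says the tunnel portal is
    is_open: bool      # live road-status feed: is the tunnel open?

def tunnel_plausible(vision_distance_m: float,
                     map_info: Optional[MapTunnel],
                     tol_m: float = 30.0) -> bool:
    """Vision reports a tunnel ahead; do the map and live data agree?"""
    if map_info is None:
        return False  # no tunnel on the map here: treat detection as suspect
    if abs(vision_distance_m - map_info.distance_m) > tol_m:
        return False  # a portal in the wrong place: suspect
    return map_info.is_open  # a closed tunnel should trigger a stop anyway

print(tunnel_plausible(120.0, MapTunnel(distance_m=115.0, is_open=True)))  # True
print(tunnel_plausible(120.0, None))  # False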
 
 