Blender uses (single precision) float for coordinates and pretty much everything else that can have fractional values. The BLI_math and BLI_math_… routines are mostly routines that take floating point arguments, do floating point calculations, and return floating point results. There are a few integer versions of routines and a very few double versions of routines.

Two questions:

A) Should everything change from float to double?

B) If not, are there places in the code where calculations could or should be done in double precision, or even exact multiprecision rational arithmetic?

Some advantages of switching to double:

A) Occasionally users complain that for very big scenes, the difference in scale between the smallest features and the size of the coordinates needed to position things in that space leads to problems when we only have single precision. With single precision we can only really deal with about 6 decimal orders of magnitude in scale difference, whereas with double precision we could deal with about 15 orders of magnitude.

B) Similar problems come into play depending on the view plane vs the 3d space scales. It is hard to write algorithms that always work (and don't crash!) when building on the quicksand of not being able to trust your math and geometry theorems. For instance, one calculation might say a point is inside a given triangle while another, mathematically equivalent one comes to the opposite conclusion. Any time we have to calculate the position of new vertices based on geometric and matrix computations over other vertex positions, limited precision arithmetic means that (a) we have to be careful, and probably special-case things, when values are near but not quite zero - certainly whenever division takes place - and (b) things that are true based on pure mathematical and geometric theorems about algorithms when all calculations are exact may become untrue when arithmetic is done and results are stored only approximately. As an example, numerous bugs have been reported with the knife tool that eventually turned out to be related to limited precision, and got fixed with very careful epsilon tweaking (that is, making comparisons against some small "epsilon" instead of zero, and adjusting the size of epsilon until things "work").

C) Presumably other 3d content creation packages use doubles (anyone know which?), and there will be a precision loss when exporting from them and importing into Blender - possibly causing geometric anomalies in Blender when none existed in the original.

D) We could stop having to use "f" suffixes on all our floating point constants, and stop needing (float) casts when using standard library routines that only deal with doubles.

E) Some external packages we've incorporated into Blender use doubles internally. This incurs costs like: conversion back and forth; extra storage to hold double versions of our data in order to pass to those packages; and losing the full advantage of the precision of those calculations by forcing the answers back into floats.

Some disadvantages I see (I'm sure there are more: discuss):

A) Large effort to switch. First, mechanical changes like float -> double, sinf -> sin, glVertex3f -> glVertex3d, 0.0f -> 0.0, etc. Probably a script can do that for the most part. Then harder things, like making sure that nobody did arithmetic, union/punning, or alignment tricks that assumed float sizes and alignment. Then the even harder problem of finding all the "epsilon tweaking" parts of the code and trying to reverse engineer why the existing epsilons were chosen and what should change if precision is double. And finally, the enormous burden on external users of Blender's API: all the extant plugins, external renderers, etc. likely have dependencies on the existing float interface. One could perhaps shim that for a while with converters to the float-based API, but long term that is an awful solution.

B) Non-backward-compatible change to the .blend file format. Maybe OK for 2.8, but we would either need massive do_version support or else need to provide a way to convert all existing pre-2.8 .blend files into the new format.

C) Increased memory usage: probably almost doubles the size of needed main memory for large models. Similarly for disk storage. Maybe the biggest effect is just that the extra bandwidth needed between cpu-memory-gpu combinations could hurt performance.

D) Performance penalty? I'm less sure here; it would be good to see some benchmark comparisons.

I came upon this question in thinking about Boolean operations on meshes, which is really a case of question B: are there places in the code where calculations could or should be done in double precision, or even exact multiprecision rational arithmetic? I'm guessing disadvantage (A) is the biggest show-stopper and is what probably makes this whole idea a non-starter.