OpenVDB  12.0.0
NanoVDB.h
1 // Copyright Contributors to the OpenVDB Project
2 // SPDX-License-Identifier: Apache-2.0
3 
4 /*!
5  \file nanovdb/NanoVDB.h
6 
7  \author Ken Museth
8 
9  \date January 8, 2020
10 
11  \brief Implements a light-weight, self-contained VDB data structure in a
12  single file! In other words, this is a significantly watered-down
13  version of the OpenVDB implementation, with few dependencies - in short,
14  a one-stop shop for a minimalistic VDB data structure that runs on
15  most platforms!
16 
17  \note It is important to note that NanoVDB (by design) is a read-only
18  sparse GPU (and CPU) friendly data structure intended for applications
19  like rendering and collision detection. As such it obviously lacks
20  a lot of the functionality and features of OpenVDB grids. NanoVDB
21  is essentially a compact linearized (or serialized) representation of
22  an OpenVDB tree with getValue methods only. For best performance use
23  the ReadAccessor::getValue method as opposed to the Tree::getValue
24  method. Note that since a ReadAccessor caches previous access patterns
25  it is by design not thread-safe, so use one instantiation per thread
26  (it is very light-weight). Also, it is not safe to copy accessors between
27  the GPU and CPU! In fact, client code should only interface
28  with the API of the Grid class (all other nodes of the NanoVDB data
29  structure can safely be ignored by most client code)!
30 
31 
32  \warning NanoVDB grids can only be constructed via tools like createNanoGrid
33  or the GridBuilder. This explains why none of the grid nodes defined below
34  have public constructors or destructors.
35 
36  \details Please see the following paper for more details on the data structure:
37  K. Museth, “VDB: High-Resolution Sparse Volumes with Dynamic Topology”,
38  ACM Transactions on Graphics 32(3), 2013, which can be found here:
39  http://www.museth.org/Ken/Publications_files/Museth_TOG13.pdf
40 
41  NanoVDB was first published here: https://dl.acm.org/doi/fullHtml/10.1145/3450623.3464653
42 
43 
44  Overview: This file implements the following fundamental classes that, when combined,
45  form the backbone of the VDB tree data structure:
46 
47  Coord - a signed integer coordinate
48  Vec3 - a 3D vector
49  Vec4 - a 4D vector
50  BBox - a bounding box
51  Mask - a bitmask essential to the non-root tree nodes
52  Map - an affine coordinate transformation
53  Grid - contains a Tree and a map for world<->index transformations. Use
54  this class as the main API with client code!
55  Tree - contains a RootNode and getValue methods that should only be used for debugging
56  RootNode - the top-level node of the VDB data structure
57  InternalNode - the internal nodes of the VDB data structure
58  LeafNode - the lowest level tree nodes that encode voxel values and state
59  ReadAccessor - implements accelerated random access operations
60 
61  Semantics: A VDB data structure encodes values and (binary) states associated with
62  signed integer coordinates. Values encoded at the leaf node level are
63  denoted voxel values, and values associated with other tree nodes are referred
64  to as tile values, which by design cover a larger coordinate index domain.
65 
66 
67  Memory layout:
68 
69  It's important to emphasize that all the grid data (defined below) are explicitly 32 byte
70  aligned, which implies that any memory buffer that contains a NanoVDB grid must also be
71  32 byte aligned. That is, the memory address of the beginning of a buffer (see ascii diagram below)
72  must be divisible by 32, i.e. uintptr_t(&buffer)%32 == 0! If this is not the case, the C++ standard
73  says the behaviour is undefined! Normally this is not a concern on GPUs, because they use 256 byte
74  aligned allocations, but the same cannot be said about the CPU.
75 
76  GridData is always at the very beginning of the buffer immediately followed by TreeData!
77  The remaining nodes and blind-data are allowed to be scattered throughout the buffer,
78  though in practice they are arranged as:
79 
80  GridData: 672 bytes (e.g. magic, checksum, major, flags, index, count, size, name, map, world bbox, voxel size, class, type, offset, count)
81 
82  TreeData: 64 bytes (node counts and byte offsets)
83 
84  ... optional padding ...
85 
86  RootData: size depends on ValueType (index bbox, voxel count, tile count, min/max/avg/standard deviation)
87 
88  Array of: RootData::Tile
89 
90  ... optional padding ...
91 
92  Array of: Upper InternalNodes of size 32^3: bbox, two bit masks, 32768 tile values, and min/max/avg/standard deviation values
93 
94  ... optional padding ...
95 
96  Array of: Lower InternalNodes of size 16^3: bbox, two bit masks, 4096 tile values, and min/max/avg/standard deviation values
97 
98  ... optional padding ...
99 
100  Array of: LeafNodes of size 8^3: bbox, bit masks, 512 voxel values, and min/max/avg/standard deviation values
101 
102 
103  Notation: "]---[" implies it has optional padding, and "][" implies zero padding
104 
105  [GridData(672B)][TreeData(64B)]---[RootData][N x Root::Tile]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
106  ^                                 ^         ^                  ^                   ^                   ^
107  |                                 |         |                  |                   |                   |
108  +-- Start of 32B aligned buffer   |         |                  |                   |                   +-- Node0::DataType* leafData
109      GridType::DataType* gridData  |         |                  |                   |
110                                    |         |                  |                   +-- Node1::DataType* lowerData
111     RootType::DataType* rootData --+         |                  |
112                                              |                  +-- Node2::DataType* upperData
113                                              |
114                                              +-- RootType::DataType::Tile* tile
115 
116 */
117 
118 #ifndef NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
119 #define NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
120 
121 // The following two header files are the only mandatory dependencies
122 #include <nanovdb/util/Util.h>// for __hostdev__ and lots of other utility functions
123 #include <nanovdb/math/Math.h>// for Coord, BBox, Vec3, Vec4 etc
124 
125 // Do not change this value! 32 byte alignment is fixed in NanoVDB
126 #define NANOVDB_DATA_ALIGNMENT 32
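// Example (illustrative, not part of the original header): any buffer that stores a NanoVDB
// grid must itself start on a 32 byte boundary, e.g. allocated on the CPU with
// std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, size). A minimal sanity check might look like:
//
//   assert(reinterpret_cast<uintptr_t>(buffer) % NANOVDB_DATA_ALIGNMENT == 0);
//
// GPU allocators typically return 256 byte aligned memory, so this mainly matters on the CPU.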
127 
128 // NANOVDB_MAGIC_NUMB is currently used for both grids and files (starting with v32.6.0)
129 // NANOVDB_MAGIC_GRID will soon be used exclusively for grids (serialized to a single buffer)
130 // NANOVDB_MAGIC_FILE will soon be used exclusively for files
131 // NANOVDB_MAGIC_NODE will soon be used exclusively for NodeManager
132 // NANOVDB_MAGIC_FRAG will soon be used exclusively for a fragmented grid, i.e. a grid that is not serialized
133 // Note: the most significant byte 0x30 ('0') of NANOVDB_MAGIC_NUMB corresponds to the trailing '0' in "NanoVDB0",
133 // and the bytes 0x31-0x34 in the magic numbers below correspond to '1'-'4' in "NanoVDB1" through "NanoVDB4"
134 #define NANOVDB_MAGIC_NUMB 0x304244566f6e614eUL // "NanoVDB0" in hex - little endian (uint64_t)
135 #define NANOVDB_MAGIC_GRID 0x314244566f6e614eUL // "NanoVDB1" in hex - little endian (uint64_t)
136 #define NANOVDB_MAGIC_FILE 0x324244566f6e614eUL // "NanoVDB2" in hex - little endian (uint64_t)
137 #define NANOVDB_MAGIC_NODE 0x334244566f6e614eUL // "NanoVDB3" in hex - little endian (uint64_t)
138 #define NANOVDB_MAGIC_FRAG 0x344244566f6e614eUL // "NanoVDB4" in hex - little endian (uint64_t)
139 #define NANOVDB_MAGIC_MASK 0x00FFFFFFFFFFFFFFUL // use this mask to remove the number
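// Example (illustrative, not part of the original header): since the five magic numbers above
// differ only in their most significant byte (the trailing '0'-'4' of "NanoVDBx"), the mask can
// be used to test whether an arbitrary 64 bit word is any NanoVDB magic number:
//
//   bool isNanoVDB = (magic & NANOVDB_MAGIC_MASK) == (NANOVDB_MAGIC_NUMB & NANOVDB_MAGIC_MASK);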
140 
141 //#define NANOVDB_MAGIC_NUMBER 0x304244566f6e614eUL
142 //#define NANOVDB_USE_NEW_MAGIC_NUMBERS// used to enable use of the new magic numbers described above
143 
144 #define NANOVDB_MAJOR_VERSION_NUMBER 32 // reflects changes to the ABI and hence also the file format
145 #define NANOVDB_MINOR_VERSION_NUMBER 7 // reflects changes to the API but not ABI
146 #define NANOVDB_PATCH_VERSION_NUMBER 0 // reflects changes that does not affect the ABI or API
147 
148 #define TBB_SUPPRESS_DEPRECATED_MESSAGES 1
149 
150 // This replaces a Coord key at the root level with a single uint64_t
151 #define NANOVDB_USE_SINGLE_ROOT_KEY
152 
153 // This replaces three levels of Coord keys in the ReadAccessor with one Coord
154 //#define NANOVDB_USE_SINGLE_ACCESSOR_KEY
155 
156 // Use this to switch between std::ofstream or FILE implementations
157 //#define NANOVDB_USE_IOSTREAMS
158 
159 // Use this to switch between old and new accessor methods
160 #define NANOVDB_NEW_ACCESSOR_METHODS
161 
162 #define NANOVDB_FPN_BRANCHLESS
163 
164 #if !defined(NANOVDB_ALIGN)
165 #define NANOVDB_ALIGN(n) alignas(n)
166 #endif // !defined(NANOVDB_ALIGN)
167 
168 namespace nanovdb {// =================================================================
169 
170 // --------------------------> Build types <------------------------------------
171 
172 /// @brief Dummy type for a voxel whose value equals an offset into an external value array
173 class ValueIndex{};
174 
175 /// @brief Dummy type for a voxel whose value equals an offset into an external value array of active values
176 class ValueOnIndex{};
177 
178 /// @brief Like @c ValueIndex but with a mutable mask
179 class ValueIndexMask{};
180 
181 /// @brief Like @c ValueOnIndex but with a mutable mask
182 class ValueOnIndexMask{};
183 
184 /// @brief Dummy type for a voxel whose value equals its binary active state
185 class ValueMask{};
186 
187 /// @brief Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
188 class Half{};
189 
190 /// @brief Dummy type for a 4bit quantization of float point values
191 class Fp4{};
192 
193 /// @brief Dummy type for a 8bit quantization of float point values
194 class Fp8{};
195 
196 /// @brief Dummy type for a 16bit quantization of float point values
197 class Fp16{};
198 
199 /// @brief Dummy type for a variable bit quantization of floating point values
200 class FpN{};
201 
202 /// @brief Dummy type for indexing points into voxels
203 class Point{};
204 
205 // --------------------------> GridType <------------------------------------
206 
207 /// @brief return the number of characters (including null termination) required to convert enum type to a string
208 template <class EnumT>
209 __hostdev__ inline constexpr uint32_t strlen(){return (uint32_t)EnumT::StrLen - (uint32_t)EnumT::End;}
210 
211 /// @brief List of types that are currently supported by NanoVDB
212 ///
213 /// @note To expand on this list do:
214 /// 1) Add the new type between Unknown and End in the enum below
215 /// 2) Add the new type to OpenToNanoVDB::processGrid that maps OpenVDB types to GridType
216 /// 3) Verify that the ConvertTrait in NanoToOpenVDB.h works correctly with the new type
217 /// 4) Add the new type to toGridType (defined below) that maps NanoVDB types to GridType
218 /// 5) Add the new type to toStr (defined below)
219 enum class GridType : uint32_t { Unknown = 0, // unknown value type - should rarely be used
220  Float = 1, // single precision floating point value
221  Double = 2, // double precision floating point value
222  Int16 = 3, // half precision signed integer value
223  Int32 = 4, // single precision signed integer value
224  Int64 = 5, // double precision signed integer value
225  Vec3f = 6, // single precision floating 3D vector
226  Vec3d = 7, // double precision floating 3D vector
227  Mask = 8, // no value, just the active state
228  Half = 9, // half precision floating point value (placeholder for IEEE 754 Half)
229  UInt32 = 10, // single precision unsigned integer value
230  Boolean = 11, // boolean value, encoded in bit array
231  RGBA8 = 12, // RGBA packed into 32bit word in reverse-order, i.e. R is lowest byte.
232  Fp4 = 13, // 4bit quantization of floating point value
233  Fp8 = 14, // 8bit quantization of floating point value
234  Fp16 = 15, // 16bit quantization of floating point value
235  FpN = 16, // variable bit quantization of floating point value
236  Vec4f = 17, // single precision floating 4D vector
237  Vec4d = 18, // double precision floating 4D vector
238  Index = 19, // index into an external array of active and inactive values
239  OnIndex = 20, // index into an external array of active values
240  IndexMask = 21, // like Index but with a mutable mask
241  OnIndexMask = 22, // like OnIndex but with a mutable mask
242  PointIndex = 23, // voxels encode indices to co-located points
243  Vec3u8 = 24, // 8bit quantization of floating point 3D vector (only as blind data)
244  Vec3u16 = 25, // 16bit quantization of floating point 3D vector (only as blind data)
245  UInt8 = 26, // 8 bit unsigned integer values (eg 0 -> 255 gray scale)
246  End = 27,// total number of types in this enum (excluding StrLen since it's not a type)
247  StrLen = End + 12};// this entry is used to determine the minimum size of c-string
248 
249 /// @brief Maps a GridType to a c-string
250 /// @param dst destination string of size 12 or larger
251 /// @param gridType GridType enum to be mapped to a string
252 /// @return Returns a c-string used to describe a GridType
253 __hostdev__ inline char* toStr(char *dst, GridType gridType)
254 {
255  switch (gridType){
256  case GridType::Unknown: return util::strcpy(dst, "?");
257  case GridType::Float: return util::strcpy(dst, "float");
258  case GridType::Double: return util::strcpy(dst, "double");
259  case GridType::Int16: return util::strcpy(dst, "int16");
260  case GridType::Int32: return util::strcpy(dst, "int32");
261  case GridType::Int64: return util::strcpy(dst, "int64");
262  case GridType::Vec3f: return util::strcpy(dst, "Vec3f");
263  case GridType::Vec3d: return util::strcpy(dst, "Vec3d");
264  case GridType::Mask: return util::strcpy(dst, "Mask");
265  case GridType::Half: return util::strcpy(dst, "Half");
266  case GridType::UInt32: return util::strcpy(dst, "uint32");
267  case GridType::Boolean: return util::strcpy(dst, "bool");
268  case GridType::RGBA8: return util::strcpy(dst, "RGBA8");
269  case GridType::Fp4: return util::strcpy(dst, "Float4");
270  case GridType::Fp8: return util::strcpy(dst, "Float8");
271  case GridType::Fp16: return util::strcpy(dst, "Float16");
272  case GridType::FpN: return util::strcpy(dst, "FloatN");
273  case GridType::Vec4f: return util::strcpy(dst, "Vec4f");
274  case GridType::Vec4d: return util::strcpy(dst, "Vec4d");
275  case GridType::Index: return util::strcpy(dst, "Index");
276  case GridType::OnIndex: return util::strcpy(dst, "OnIndex");
277  case GridType::IndexMask: return util::strcpy(dst, "IndexMask");
278  case GridType::OnIndexMask: return util::strcpy(dst, "OnIndexMask");
279  case GridType::PointIndex: return util::strcpy(dst, "PointIndex");
280  case GridType::Vec3u8: return util::strcpy(dst, "Vec3u8");
281  case GridType::Vec3u16: return util::strcpy(dst, "Vec3u16");
282  case GridType::UInt8: return util::strcpy(dst, "uint8");
283  default: return util::strcpy(dst, "End");
284  }
285 }
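// Example (illustrative, not part of the original header): StrLen - End, i.e. strlen<GridType>(),
// is the minimum buffer size required by toStr, so a typical (hypothetical) usage is:
//
//   char buffer[strlen<GridType>()];             // 12 characters, incl. null termination
//   char* str = toStr(buffer, GridType::Vec3f);  // str == buffer, now holding "Vec3f"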
286 
287 // --------------------------> GridClass <------------------------------------
288 
289 /// @brief Classes (superset of OpenVDB) that are currently supported by NanoVDB
290 enum class GridClass : uint32_t { Unknown = 0,
291  LevelSet = 1, // narrow band level set, e.g. SDF
292  FogVolume = 2, // fog volume, e.g. density
293  Staggered = 3, // staggered MAC grid, e.g. velocity
294  PointIndex = 4, // point index grid
295  PointData = 5, // point data grid
296  Topology = 6, // grid with active states only (no values)
297  VoxelVolume = 7, // volume of geometric cubes, e.g. colored cubes in Minecraft
298  IndexGrid = 8, // grid whose values are offsets, e.g. into an external array
299  TensorGrid = 9, // Index grid for indexing learnable tensor features
300  End = 10,// total number of types in this enum (excluding StrLen since it's not a type)
301  StrLen = End + 7};// this entry is used to determine the minimum size of c-string
302 
303 
304 /// @brief Returns a c-string used to describe a GridClass
305 /// @param dst destination string of size 7 or larger
306 /// @param gridClass GridClass enum to be converted to a string
307 __hostdev__ inline char* toStr(char *dst, GridClass gridClass)
308 {
309  switch (gridClass){
310  case GridClass::Unknown: return util::strcpy(dst, "?");
311  case GridClass::LevelSet: return util::strcpy(dst, "SDF");
312  case GridClass::FogVolume: return util::strcpy(dst, "FOG");
313  case GridClass::Staggered: return util::strcpy(dst, "MAC");
314  case GridClass::PointIndex: return util::strcpy(dst, "PNTIDX");
315  case GridClass::PointData: return util::strcpy(dst, "PNTDAT");
316  case GridClass::Topology: return util::strcpy(dst, "TOPO");
317  case GridClass::VoxelVolume: return util::strcpy(dst, "VOX");
318  case GridClass::IndexGrid: return util::strcpy(dst, "INDEX");
319  case GridClass::TensorGrid: return util::strcpy(dst, "TENSOR");
320  default: return util::strcpy(dst, "END");
321  }
322 }
323 
324 // --------------------------> GridFlags <------------------------------------
325 
326 /// @brief Grid flags which indicate what extra information is present in the grid buffer.
327 enum class GridFlags : uint32_t {
328  HasLongGridName = 1 << 0, // grid name is longer than 256 characters
329  HasBBox = 1 << 1, // nodes contain bounding-boxes of active values
330  HasMinMax = 1 << 2, // nodes contain min/max of active values
331  HasAverage = 1 << 3, // nodes contain averages of active values
332  HasStdDeviation = 1 << 4, // nodes contain standard deviations of active values
333  IsBreadthFirst = 1 << 5, // nodes are typically arranged breadth-first in memory
334  End = 1 << 6, // use End - 1 as a mask for the 5 lower bit flags
335  StrLen = End + 23,// this entry is used to determine the minimum size of c-string
336 };
337 
338 /// @brief Returns a c-string used to describe a GridFlags value
339 /// @param dst destination string of size 23 or larger
340 /// @param gridFlags GridFlags enum to be converted to a string
341 __hostdev__ inline const char* toStr(char *dst, GridFlags gridFlags)
342 {
343  switch (gridFlags){
344  case GridFlags::HasLongGridName: return util::strcpy(dst, "has long grid name");
345  case GridFlags::HasBBox: return util::strcpy(dst, "has bbox");
346  case GridFlags::HasMinMax: return util::strcpy(dst, "has min/max");
347  case GridFlags::HasAverage: return util::strcpy(dst, "has average");
348  case GridFlags::HasStdDeviation: return util::strcpy(dst, "has standard deviation");
349  case GridFlags::IsBreadthFirst: return util::strcpy(dst, "is breadth-first");
350  default: return util::strcpy(dst, "end");
351  }
352 }
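// Example (illustrative, not part of the original header): since the enum values are single bit
// masks, a raw 32 bit flag word can be tested with a bitwise AND, e.g.
//
//   bool hasBBox = flags & static_cast<uint32_t>(GridFlags::HasBBox);
//
// The BitFlags class defined further down in this file wraps exactly this kind of test.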
353 
354 // --------------------------> MagicType <------------------------------------
355 
356 /// @brief Enums used to identify magic numbers recognized by NanoVDB
357 enum class MagicType : uint32_t { Unknown = 0,// first 64 bits are neither of the cases below
358  OpenVDB = 1,// first 32 bits = 0x56444220UL
359  NanoVDB = 2,// first 64 bits = NANOVDB_MAGIC_NUMB
360  NanoGrid = 3,// first 64 bits = NANOVDB_MAGIC_GRID
361  NanoFile = 4,// first 64 bits = NANOVDB_MAGIC_FILE
362  NanoNode = 5,// first 64 bits = NANOVDB_MAGIC_NODE
363  NanoFrag = 6,// first 64 bits = NANOVDB_MAGIC_FRAG
364  End = 7,
365  StrLen = End + 25};// this entry is used to determine the minimum size of c-string
366 
367 /// @brief maps 64 bits of magic number to enum
368 __hostdev__ inline MagicType toMagic(uint64_t magic)
369 {
370  switch (magic){
371  case NANOVDB_MAGIC_NUMB: return MagicType::NanoVDB;
372  case NANOVDB_MAGIC_GRID: return MagicType::NanoGrid;
373  case NANOVDB_MAGIC_FILE: return MagicType::NanoFile;
374  case NANOVDB_MAGIC_NODE: return MagicType::NanoNode;
375  case NANOVDB_MAGIC_FRAG: return MagicType::NanoFrag;
376  default: return (magic & ~uint32_t(0)) == 0x56444220UL ? MagicType::OpenVDB : MagicType::Unknown;
377  }
378 }
379 
380 /// @brief print 64-bit magic number to string
381 /// @param dst destination string of size 25 or larger
382 /// @param magic 64 bit magic number to be printed
383 /// @return return destination string @c dst
384 __hostdev__ inline char* toStr(char *dst, MagicType magic)
385 {
386  switch (magic){
387  case MagicType::Unknown: return util::strcpy(dst, "unknown");
388  case MagicType::NanoVDB: return util::strcpy(dst, "nanovdb");
389  case MagicType::NanoGrid: return util::strcpy(dst, "nanovdb::Grid");
390  case MagicType::NanoFile: return util::strcpy(dst, "nanovdb::File");
391  case MagicType::NanoNode: return util::strcpy(dst, "nanovdb::NodeManager");
392  case MagicType::NanoFrag: return util::strcpy(dst, "fragmented nanovdb::Grid");
393  case MagicType::OpenVDB: return util::strcpy(dst, "openvdb");
394  default: return util::strcpy(dst, "end");
395  }
396 }
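// Example (illustrative, not part of the original header): classify the first 8 bytes of an
// unknown buffer or file, e.g. when probing its origin:
//
//   uint64_t magic;                                  // first 8 bytes of the buffer
//   std::memcpy(&magic, buffer, sizeof(magic));
//   char str[strlen<MagicType>()];                   // 25 characters is enough
//   printf("buffer starts with: %s\n", toStr(str, toMagic(magic)));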
397 
398 // --------------------------> PointType enums <------------------------------------
399 
400 // Define the type used when the points are encoded as blind data in the output grid
401 enum class PointType : uint32_t { Disable = 0,// no point information e.g. when BuildT != Point
402  PointID = 1,// linear index of type uint32_t to points
403  World64 = 2,// Vec3d in world space
404  World32 = 3,// Vec3f in world space
405  Grid64 = 4,// Vec3d in grid space
406  Grid32 = 5,// Vec3f in grid space
407  Voxel32 = 6,// Vec3f in voxel space
408  Voxel16 = 7,// Vec3u16 in voxel space
409  Voxel8 = 8,// Vec3u8 in voxel space
410  Default = 9,// output matches input, i.e. Vec3d or Vec3f in world space
411  End =10 };
412 
413 // --------------------------> GridBlindData enums <------------------------------------
414 
415 /// @brief Blind-data Classes that are currently supported by NanoVDB
416 enum class GridBlindDataClass : uint32_t { Unknown = 0,
417  IndexArray = 1,
418  AttributeArray = 2,
419  GridName = 3,
420  ChannelArray = 4,
421  End = 5 };
422 
423 /// @brief Blind-data Semantics that are currently understood by NanoVDB
424 enum class GridBlindDataSemantic : uint32_t { Unknown = 0,
425  PointPosition = 1, // 3D coordinates in an unknown space
426  PointColor = 2,
427  PointNormal = 3,
428  PointRadius = 4,
429  PointVelocity = 5,
430  PointId = 6,
431  WorldCoords = 7, // 3D coordinates in world space, e.g. (0.056, 0.8, 1.8)
432  GridCoords = 8, // 3D coordinates in grid space, e.g. (1.2, 4.0, 5.7), aka index-space
433  VoxelCoords = 9, // 3D coordinates in voxel space, e.g. (0.2, 0.0, 0.7)
434  End = 10 };
435 
436 // --------------------------> BuildTraits <------------------------------------
437 
438 /// @brief Define static boolean tests for template build types
439 template<typename T>
440 struct BuildTraits
441 {
442  // check if T is an index type
443  static constexpr bool is_index = util::is_same<T, ValueIndex, ValueIndexMask, ValueOnIndex, ValueOnIndexMask>::value;
444  static constexpr bool is_onindex = util::is_same<T, ValueOnIndex, ValueOnIndexMask>::value;
445  static constexpr bool is_offindex = util::is_same<T, ValueIndex, ValueIndexMask>::value;
446  static constexpr bool is_indexmask = util::is_same<T, ValueIndexMask, ValueOnIndexMask>::value;
447  // check if T is a compressed float type with fixed bit precision
448  static constexpr bool is_FpX = util::is_same<T, Fp4, Fp8, Fp16>::value;
449  // check if T is a compressed float type with fixed or variable bit precision
450  static constexpr bool is_Fp = util::is_same<T, Fp4, Fp8, Fp16, FpN>::value;
451  // check if T is a POD float type, i.e float or double
452  static constexpr bool is_float = util::is_floating_point<T>::value;
453  // check if T is a template specialization of LeafData<T>, i.e. has T mValues[512]
454  static constexpr bool is_special = is_index || is_Fp || util::is_same<T, Point, bool, ValueMask>::value;
455 }; // BuildTraits
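// Example (illustrative, not part of the original header): BuildTraits is typically used in
// compile-time branches when writing code that is generic over the build type, e.g.
//
//   template<typename BuildT>
//   void process()
//   {
//       if constexpr(BuildTraits<BuildT>::is_index) {
//           // BuildT is ValueIndex, ValueOnIndex, ValueIndexMask or ValueOnIndexMask
//       } else if constexpr(BuildTraits<BuildT>::is_Fp) {
//           // BuildT is one of the quantized float types Fp4, Fp8, Fp16 or FpN
//       }
//   }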
456 
457 // --------------------------> BuildToValueMap <------------------------------------
458 
459 /// @brief Maps one type (e.g. the build types above) to other (actual) types
460 template<typename T>
461 struct BuildToValueMap
462 {
463  using Type = T;
464  using type = T;
465 };
466 
467 template<>
468 struct BuildToValueMap<ValueIndex>
469 {
470  using Type = uint64_t;
471  using type = uint64_t;
472 };
473 
474 template<>
475 struct BuildToValueMap<ValueOnIndex>
476 {
477  using Type = uint64_t;
478  using type = uint64_t;
479 };
480 
481 template<>
482 struct BuildToValueMap<ValueIndexMask>
483 {
484  using Type = uint64_t;
485  using type = uint64_t;
486 };
487 
488 template<>
489 struct BuildToValueMap<ValueOnIndexMask>
490 {
491  using Type = uint64_t;
492  using type = uint64_t;
493 };
494 
495 template<>
496 struct BuildToValueMap<ValueMask>
497 {
498  using Type = bool;
499  using type = bool;
500 };
501 
502 template<>
503 struct BuildToValueMap<Half>
504 {
505  using Type = float;
506  using type = float;
507 };
508 
509 template<>
510 struct BuildToValueMap<Fp4>
511 {
512  using Type = float;
513  using type = float;
514 };
515 
516 template<>
517 struct BuildToValueMap<Fp8>
518 {
519  using Type = float;
520  using type = float;
521 };
522 
523 template<>
524 struct BuildToValueMap<Fp16>
525 {
526  using Type = float;
527  using type = float;
528 };
529 
530 template<>
531 struct BuildToValueMap<FpN>
532 {
533  using Type = float;
534  using type = float;
535 };
536 
537 template<>
538 struct BuildToValueMap<Point>
539 {
540  using Type = uint64_t;
541  using type = uint64_t;
542 };
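// Example (illustrative, not part of the original header): BuildToValueMap converts a dummy
// build type into the value type that is actually stored or returned, e.g.
//
//   static_assert(util::is_same<BuildToValueMap<Fp16>::Type, float>::value, "");
//   static_assert(util::is_same<BuildToValueMap<ValueOnIndex>::Type, uint64_t>::value, "");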
543 
544 // --------------------------> utility functions related to alignment <------------------------------------
545 
546 /// @brief return true if the specified pointer is 32 byte aligned
547 __hostdev__ inline static bool isAligned(const void* p){return uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;}
548 
549 /// @brief return the smallest number of bytes that when added to the specified pointer results in a 32 byte aligned pointer.
550 __hostdev__ inline static uint64_t alignmentPadding(const void* p)
551 {
552  NANOVDB_ASSERT(p);
553  return (NANOVDB_DATA_ALIGNMENT - (uint64_t(p) % NANOVDB_DATA_ALIGNMENT)) % NANOVDB_DATA_ALIGNMENT;
554 }
555 
556 /// @brief offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers.
557 template <typename T>
558 __hostdev__ inline static T* alignPtr(T* p){return util::PtrAdd<T>(p, alignmentPadding(p));}
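// Example (illustrative, not part of the original header): given a raw, possibly unaligned
// allocation, alignPtr returns the first 32 byte aligned address inside it. Note that the
// allocation then needs up to NANOVDB_DATA_ALIGNMENT - 1 bytes of extra head room:
//
//   void*    raw     = std::malloc(byteCount + NANOVDB_DATA_ALIGNMENT - 1);
//   uint8_t* aligned = alignPtr(static_cast<uint8_t*>(raw));  // isAligned(aligned) == true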
559 
560 // --------------------------> isFloatingPoint(GridType) <------------------------------------
561 
562 /// @brief return true if the GridType maps to a floating point type
563 __hostdev__ inline bool isFloatingPoint(GridType gridType)
564 {
565  return gridType == GridType::Float ||
566  gridType == GridType::Double ||
567  gridType == GridType::Half ||
568  gridType == GridType::Fp4 ||
569  gridType == GridType::Fp8 ||
570  gridType == GridType::Fp16 ||
571  gridType == GridType::FpN;
572 }
573 
574 // --------------------------> isFloatingPointVector(GridType) <------------------------------------
575 
576 /// @brief return true if the GridType maps to a floating point vector type (Vec3 or Vec4).
577 __hostdev__ inline bool isFloatingPointVector(GridType gridType)
578 {
579  return gridType == GridType::Vec3f ||
580  gridType == GridType::Vec3d ||
581  gridType == GridType::Vec4f ||
582  gridType == GridType::Vec4d;
583 }
584 
585 // --------------------------> isInteger(GridType) <------------------------------------
586 
587 /// @brief Return true if the GridType maps to a POD integer type.
588 /// @details These types are used to associate a voxel with a POD integer type
589 __hostdev__ inline bool isInteger(GridType gridType)
590 {
591  return gridType == GridType::Int16 ||
592  gridType == GridType::Int32 ||
593  gridType == GridType::Int64 ||
594  gridType == GridType::UInt32||
595  gridType == GridType::UInt8;
596 }
597 
598 // --------------------------> isIndex(GridType) <------------------------------------
599 
600 /// @brief Return true if the GridType maps to a special index type (not a POD integer type).
601 /// @details These types are used to index from a voxel into an external array of values, e.g. sidecar or blind data.
602 __hostdev__ inline bool isIndex(GridType gridType)
603 {
604  return gridType == GridType::Index ||// index both active and inactive values
605  gridType == GridType::OnIndex ||// index active values only
606  gridType == GridType::IndexMask ||// as Index, but with an additional mask
607  gridType == GridType::OnIndexMask;// as OnIndex, but with an additional mask
608 }
609 
610 // --------------------------> isValue(GridType, GridClass) <------------------------------------
611 
612 /// @brief return true if the combination of GridType and GridClass is valid.
613 __hostdev__ inline bool isValid(GridType gridType, GridClass gridClass)
614 {
615  if (gridClass == GridClass::LevelSet || gridClass == GridClass::FogVolume) {
616  return isFloatingPoint(gridType);
617  } else if (gridClass == GridClass::Staggered) {
618  return isFloatingPointVector(gridType);
619  } else if (gridClass == GridClass::PointIndex || gridClass == GridClass::PointData) {
620  return gridType == GridType::PointIndex || gridType == GridType::UInt32;
621  } else if (gridClass == GridClass::Topology) {
622  return gridType == GridType::Mask;
623  } else if (gridClass == GridClass::IndexGrid) {
624  return isIndex(gridType);
625  } else if (gridClass == GridClass::VoxelVolume) {
626  return gridType == GridType::RGBA8 || gridType == GridType::Float ||
627  gridType == GridType::Double || gridType == GridType::Vec3f ||
628  gridType == GridType::Vec3d || gridType == GridType::UInt32 ||
629  gridType == GridType::UInt8;
630  }
631  return gridClass < GridClass::End && gridType < GridType::End; // any valid combination
632 }
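// Example (illustrative, not part of the original header):
//
//   isValid(GridType::Float, GridClass::LevelSet);  // true:  level sets store floating point values
//   isValid(GridType::Int32, GridClass::LevelSet);  // false: level sets cannot store integers
//   isValid(GridType::Vec3f, GridClass::Staggered); // true:  MAC grids store vector values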
633 
634 // --------------------------> validation of blind data meta data <------------------------------------
635 
636 /// @brief return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid.
637 __hostdev__ inline bool isValid(const GridBlindDataClass& blindClass,
638  const GridBlindDataSemantic& blindSemantics,
639  const GridType& blindType)
640 {
641  bool test = false;
642  switch (blindClass) {
643  case GridBlindDataClass::IndexArray:
644  test = (blindSemantics == GridBlindDataSemantic::Unknown ||
645  blindSemantics == GridBlindDataSemantic::PointId) &&
646  isInteger(blindType);
647  break;
648  case GridBlindDataClass::AttributeArray:
649  if (blindSemantics == GridBlindDataSemantic::PointPosition ||
650  blindSemantics == GridBlindDataSemantic::WorldCoords) {
651  test = blindType == GridType::Vec3f || blindType == GridType::Vec3d;
652  } else if (blindSemantics == GridBlindDataSemantic::GridCoords) {
653  test = blindType == GridType::Vec3f;
654  } else if (blindSemantics == GridBlindDataSemantic::VoxelCoords) {
655  test = blindType == GridType::Vec3f || blindType == GridType::Vec3u8 || blindType == GridType::Vec3u16;
656  } else {
657  test = blindSemantics != GridBlindDataSemantic::PointId;
658  }
659  break;
660  case GridBlindDataClass::GridName:
661  test = blindSemantics == GridBlindDataSemantic::Unknown && blindType == GridType::Unknown;
662  break;
663  default: // captures blindClass == Unknown and ChannelArray
664  test = blindClass < GridBlindDataClass::End &&
665  blindSemantics < GridBlindDataSemantic::End &&
666  blindType < GridType::End; // any valid combination
667  break;
668  }
669  //if (!test) printf("Invalid combination: GridBlindDataClass=%u, GridBlindDataSemantic=%u, GridType=%u\n",(uint32_t)blindClass, (uint32_t)blindSemantics, (uint32_t)blindType);
670  return test;
671 }
672 
673 // ----------------------------> Version class <-------------------------------------
674 
675 /// @brief Bit-compacted representation of all three version numbers
676 ///
677 /// @details major is the top 11 bits, minor is the 11 middle bits and patch is the lower 10 bits
678 class Version
679 {
680  uint32_t mData; // 11 + 11 + 10 bit packing of major + minor + patch
681 public:
682  static constexpr uint32_t End = 0, StrLen = 8;// for strlen<Version>()
683  /// @brief Default constructor
684  __hostdev__ Version()
685  : mData(uint32_t(NANOVDB_MAJOR_VERSION_NUMBER) << 21 |
686  uint32_t(NANOVDB_MINOR_VERSION_NUMBER) << 10 |
687  uint32_t(NANOVDB_PATCH_VERSION_NUMBER))
688  {
689  }
690  /// @brief Constructor from a raw uint32_t data representation
691  __hostdev__ Version(uint32_t data) : mData(data) {}
692  /// @brief Constructor from major.minor.patch version numbers
693  __hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
694  : mData(major << 21 | minor << 10 | patch)
695  {
696  NANOVDB_ASSERT(major < (1u << 11)); // max value of major is 2047
697  NANOVDB_ASSERT(minor < (1u << 11)); // max value of minor is 2047
698  NANOVDB_ASSERT(patch < (1u << 10)); // max value of patch is 1023
699  }
700  __hostdev__ bool operator==(const Version& rhs) const { return mData == rhs.mData; }
701  __hostdev__ bool operator<( const Version& rhs) const { return mData < rhs.mData; }
702  __hostdev__ bool operator<=(const Version& rhs) const { return mData <= rhs.mData; }
703  __hostdev__ bool operator>( const Version& rhs) const { return mData > rhs.mData; }
704  __hostdev__ bool operator>=(const Version& rhs) const { return mData >= rhs.mData; }
705  __hostdev__ uint32_t id() const { return mData; }
706  __hostdev__ uint32_t getMajor() const { return (mData >> 21) & ((1u << 11) - 1); }
707  __hostdev__ uint32_t getMinor() const { return (mData >> 10) & ((1u << 11) - 1); }
708  __hostdev__ uint32_t getPatch() const { return mData & ((1u << 10) - 1); }
709  __hostdev__ bool isCompatible() const { return this->getMajor() == uint32_t(NANOVDB_MAJOR_VERSION_NUMBER); }
710  /// @brief Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER
711  /// @return return 0 if the major version equals NANOVDB_MAJOR_VERSION_NUMBER, else a negative age if this
712  /// instance has a smaller major version (is older), and a positive age if it is newer, i.e. larger.
713  __hostdev__ int age() const {return int(this->getMajor()) - int(NANOVDB_MAJOR_VERSION_NUMBER);}
714 }; // Version
715 
716 /// @brief print the version number to a c-string
717 /// @param dst destination string of size 8 or more
718 /// @param v version to be printed
719 /// @return returns destination string @c dst
720 __hostdev__ inline char* toStr(char *dst, const Version &v)
721 {
722  return util::sprint(dst, v.getMajor(), ".",v.getMinor(), ".",v.getPatch());
723 }
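// Example (illustrative, not part of the original header):
//
//   Version v(32, 6, 0);                 // e.g. the version stored with a grid
//   char str[strlen<Version>()];         // 8 characters
//   printf("grid version: %s\n", toStr(str, v));  // prints "32.6.0"
//   bool ok = v.isCompatible();          // true only if the major version matches this header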
724 
725 // ----------------------------> TensorTraits <--------------------------------------
726 
727 template<typename T, int Rank = (util::is_specialization<T, math::Vec3>::value || util::is_specialization<T, math::Vec4>::value || util::is_same<T, math::Rgba8>::value) ? 1 : 0>
728 struct TensorTraits;
729 
730 template<typename T>
731 struct TensorTraits<T, 0>
732 {
733  static const int Rank = 0; // i.e. scalar
734  static const bool IsScalar = true;
735  static const bool IsVector = false;
736  static const int Size = 1;
737  using ElementType = T;
738  static T scalar(const T& s) { return s; }
739 };
740 
741 template<typename T>
742 struct TensorTraits<T, 1>
743 {
744  static const int Rank = 1; // i.e. vector
745  static const bool IsScalar = false;
746  static const bool IsVector = true;
747  static const int Size = T::SIZE;
748  using ElementType = typename T::ValueType;
749  static ElementType scalar(const T& v) { return v.length(); }
750 };
751 
752 // ----------------------------> FloatTraits <--------------------------------------
753 
754 template<typename T, int = sizeof(typename TensorTraits<T>::ElementType)>
755 struct FloatTraits
756 {
757  using FloatType = float;
758 };
759 
760 template<typename T>
761 struct FloatTraits<T, 8>
762 {
763  using FloatType = double;
764 };
765 
766 template<>
767 struct FloatTraits<bool, 1>
768 {
769  using FloatType = bool;
770 };
771 
772 template<>
773 struct FloatTraits<ValueIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
774 {
775  using FloatType = uint64_t;
776 };
777 
778 template<>
779 struct FloatTraits<ValueIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
780 {
781  using FloatType = uint64_t;
782 };
783 
784 template<>
785 struct FloatTraits<ValueOnIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
786 {
787  using FloatType = uint64_t;
788 };
789 
790 template<>
791 struct FloatTraits<ValueOnIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
792 {
793  using FloatType = uint64_t;
794 };
795 
796 template<>
797 struct FloatTraits<ValueMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
798 {
799  using FloatType = bool;
800 };
801 
802 template<>
803 struct FloatTraits<Point, 1> // size of empty class in C++ is 1 byte and not 0 byte
804 {
805  using FloatType = double;
806 };
807 
808 // ----------------------------> mapping BuildType -> GridType <--------------------------------------
809 
810 /// @brief Maps from a templated build type to a GridType enum
811 template<typename BuildT>
812 __hostdev__ inline GridType toGridType()
813 {
814  if constexpr(util::is_same<BuildT, float>::value) { // resolved at compile-time
815  return GridType::Float;
816  } else if constexpr(util::is_same<BuildT, double>::value) {
817  return GridType::Double;
818  } else if constexpr(util::is_same<BuildT, int16_t>::value) {
819  return GridType::Int16;
820  } else if constexpr(util::is_same<BuildT, int32_t>::value) {
821  return GridType::Int32;
822  } else if constexpr(util::is_same<BuildT, int64_t>::value) {
823  return GridType::Int64;
824  } else if constexpr(util::is_same<BuildT, Vec3f>::value) {
825  return GridType::Vec3f;
826  } else if constexpr(util::is_same<BuildT, Vec3d>::value) {
827  return GridType::Vec3d;
828  } else if constexpr(util::is_same<BuildT, uint32_t>::value) {
829  return GridType::UInt32;
830  } else if constexpr(util::is_same<BuildT, ValueMask>::value) {
831  return GridType::Mask;
832  } else if constexpr(util::is_same<BuildT, Half>::value) {
833  return GridType::Half;
834  } else if constexpr(util::is_same<BuildT, ValueIndex>::value) {
835  return GridType::Index;
836  } else if constexpr(util::is_same<BuildT, ValueOnIndex>::value) {
837  return GridType::OnIndex;
838  } else if constexpr(util::is_same<BuildT, ValueIndexMask>::value) {
839  return GridType::IndexMask;
840  } else if constexpr(util::is_same<BuildT, ValueOnIndexMask>::value) {
841  return GridType::OnIndexMask;
842  } else if constexpr(util::is_same<BuildT, bool>::value) {
843  return GridType::Boolean;
844  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
845  return GridType::RGBA8;
846  } else if constexpr(util::is_same<BuildT, Fp4>::value) {
847  return GridType::Fp4;
848  } else if constexpr(util::is_same<BuildT, Fp8>::value) {
849  return GridType::Fp8;
850  } else if constexpr(util::is_same<BuildT, Fp16>::value) {
851  return GridType::Fp16;
852  } else if constexpr(util::is_same<BuildT, FpN>::value) {
853  return GridType::FpN;
854  } else if constexpr(util::is_same<BuildT, Vec4f>::value) {
855  return GridType::Vec4f;
856  } else if constexpr(util::is_same<BuildT, Vec4d>::value) {
857  return GridType::Vec4d;
858  } else if constexpr(util::is_same<BuildT, Point>::value) {
859  return GridType::PointIndex;
860  } else if constexpr(util::is_same<BuildT, Vec3u8>::value) {
861  return GridType::Vec3u8;
862  } else if constexpr(util::is_same<BuildT, Vec3u16>::value) {
863  return GridType::Vec3u16;
864  } else if constexpr(util::is_same<BuildT, uint8_t>::value) {
865  return GridType::UInt8;
866  }
867  return GridType::Unknown;
868 }// toGridType
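// Example (illustrative, not part of the original header): toGridType maps the dummy build
// types to the GridType enum stored with a grid, e.g.
//
//   GridType t = toGridType<float>();        // GridType::Float
//   GridType u = toGridType<ValueOnIndex>(); // GridType::OnIndex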
869 
870 template<typename BuildT>
871 [[deprecated("Use toGridType<T>() instead.")]]
872 __hostdev__ inline GridType mapToGridType(){return toGridType<BuildT>();}
873 
874 // ----------------------------> mapping BuildType -> GridClass <--------------------------------------
875 
876 /// @brief Maps from a templated build type to a GridClass enum
877 template<typename BuildT>
878 __hostdev__ inline GridClass toGridClass(GridClass defaultClass = GridClass::Unknown)
879 {
880  if constexpr(util::is_same<BuildT, ValueMask>::value) {
881  return GridClass::Topology;
882  } else if constexpr(BuildTraits<BuildT>::is_index) {
883  return GridClass::IndexGrid;
884  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
885  return GridClass::VoxelVolume;
886  } else if constexpr(util::is_same<BuildT, Point>::value) {
887  return GridClass::PointIndex;
888  }
889  return defaultClass;
890 }
891 
892 template<typename BuildT>
893 [[deprecated("Use toGridClass<T>() instead.")]]
894 __hostdev__ inline GridClass mapToGridClass()
895 {
896  return toGridClass<BuildT>();
897 }
898 
899 // ----------------------------> BitFlags <--------------------------------------
900 
901 template<int N>
902 struct BitArray;
903 template<>
904 struct BitArray<8>
905 {
906  uint8_t mFlags{0};
907 };
908 template<>
909 struct BitArray<16>
910 {
911  uint16_t mFlags{0};
912 };
913 template<>
914 struct BitArray<32>
915 {
916  uint32_t mFlags{0};
917 };
918 template<>
919 struct BitArray<64>
920 {
921  uint64_t mFlags{0};
922 };
923 
924 template<int N>
925 class BitFlags : public BitArray<N>
926 {
927 protected:
928  using BitArray<N>::mFlags;
929 
930 public:
931  using Type = decltype(mFlags);
932  BitFlags() {}
933  BitFlags(Type mask) : BitArray<N>{mask} {}
934  BitFlags(std::initializer_list<uint8_t> list)
935  {
936  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
937  }
938  template<typename MaskT>
939  BitFlags(std::initializer_list<MaskT> list)
940  {
941  for (auto mask : list) mFlags |= static_cast<Type>(mask);
942  }
943  __hostdev__ Type data() const { return mFlags; }
944  __hostdev__ Type& data() { return mFlags; }
945  __hostdev__ void initBit(std::initializer_list<uint8_t> list)
946  {
947  mFlags = 0u;
948  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
949  }
950  template<typename MaskT>
951  __hostdev__ void initMask(std::initializer_list<MaskT> list)
952  {
953  mFlags = 0u;
954  for (auto mask : list) mFlags |= static_cast<Type>(mask);
955  }
956  //__hostdev__ Type& data() { return mFlags; }
957  //__hostdev__ Type data() const { return mFlags; }
958  __hostdev__ Type getFlags() const { return mFlags & (static_cast<Type>(GridFlags::End) - 1u); } // mask out everything except relevant bits
959 
960  __hostdev__ void setOn() { mFlags = ~Type(0u); }
961  __hostdev__ void setOff() { mFlags = Type(0u); }
962 
963  __hostdev__ void setBitOn(uint8_t bit) { mFlags |= static_cast<Type>(1 << bit); }
964  __hostdev__ void setBitOff(uint8_t bit) { mFlags &= ~static_cast<Type>(1 << bit); }
965 
966  __hostdev__ void setBitOn(std::initializer_list<uint8_t> list)
967  {
968  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
969  }
970  __hostdev__ void setBitOff(std::initializer_list<uint8_t> list)
971  {
972  for (auto bit : list) mFlags &= ~static_cast<Type>(1 << bit);
973  }
974 
975  template<typename MaskT>
976  __hostdev__ void setMaskOn(MaskT mask) { mFlags |= static_cast<Type>(mask); }
977  template<typename MaskT>
978  __hostdev__ void setMaskOff(MaskT mask) { mFlags &= ~static_cast<Type>(mask); }
979 
980  template<typename MaskT>
981  __hostdev__ void setMaskOn(std::initializer_list<MaskT> list)
982  {
983  for (auto mask : list) mFlags |= static_cast<Type>(mask);
984  }
985  template<typename MaskT>
986  __hostdev__ void setMaskOff(std::initializer_list<MaskT> list)
987  {
988  for (auto mask : list) mFlags &= ~static_cast<Type>(mask);
989  }
990 
991  __hostdev__ void setBit(uint8_t bit, bool on) { on ? this->setBitOn(bit) : this->setBitOff(bit); }
992  template<typename MaskT>
993  __hostdev__ void setMask(MaskT mask, bool on) { on ? this->setMaskOn(mask) : this->setMaskOff(mask); }
994 
995  __hostdev__ bool isOn() const { return mFlags == ~Type(0u); }
996  __hostdev__ bool isOff() const { return mFlags == Type(0u); }
997  __hostdev__ bool isBitOn(uint8_t bit) const { return 0 != (mFlags & static_cast<Type>(1 << bit)); }
998  __hostdev__ bool isBitOff(uint8_t bit) const { return 0 == (mFlags & static_cast<Type>(1 << bit)); }
999  template<typename MaskT>
1000  __hostdev__ bool isMaskOn(MaskT mask) const { return 0 != (mFlags & static_cast<Type>(mask)); }
1001  template<typename MaskT>
1002  __hostdev__ bool isMaskOff(MaskT mask) const { return 0 == (mFlags & static_cast<Type>(mask)); }
1003  /// @brief return true if any of the masks in the list are on
1004  template<typename MaskT>
1005  __hostdev__ bool isMaskOn(std::initializer_list<MaskT> list) const
1006  {
1007  for (auto mask : list) {
1008  if (0 != (mFlags & static_cast<Type>(mask))) return true;
1009  }
1010  return false;
1011  }
1012  /// @brief return true if any of the masks in the list are off
1013  template<typename MaskT>
1014  __hostdev__ bool isMaskOff(std::initializer_list<MaskT> list) const
1015  {
1016  for (auto mask : list) {
1017  if (0 == (mFlags & static_cast<Type>(mask))) return true;
1018  }
1019  return false;
1020  }
1021  /// @brief required for backwards compatibility
1022  __hostdev__ BitFlags& operator=(Type n)
1023  {
1024  mFlags = n;
1025  return *this;
1026  }
1027 }; // BitFlags<N>
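// Example (illustrative, not part of the original header): BitFlags wraps a raw flag word and
// is typically used with scoped enums like GridFlags, e.g.
//
//   BitFlags<32> flags;
//   flags.initMask({GridFlags::HasBBox, GridFlags::HasMinMax});
//   bool hasBBox = flags.isMaskOn(GridFlags::HasBBox);   // true
//   flags.setMask(GridFlags::HasMinMax, false);          // clear a single flag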
1028 
1029 // ----------------------------> Mask <--------------------------------------
1030 
1031 /// @brief Bit-mask to encode active states and facilitate sequential iterators
1032 /// and a fast codec for I/O compression.
1033 template<uint32_t LOG2DIM>
1034 class Mask
1035 {
1036 public:
1037  static constexpr uint32_t SIZE = 1U << (3 * LOG2DIM); // Number of bits in mask
1038  static constexpr uint32_t WORD_COUNT = SIZE >> 6; // Number of 64 bit words
1039 
1040  /// @brief Return the memory footprint in bytes of this Mask
1041  __hostdev__ static size_t memUsage() { return sizeof(Mask); }
1042 
1043  /// @brief Return the number of bits available in this Mask
1044  __hostdev__ static uint32_t bitCount() { return SIZE; }
1045 
1046  /// @brief Return the number of machine words used by this Mask
1047  __hostdev__ static uint32_t wordCount() { return WORD_COUNT; }
1048 
1049  /// @brief Return the total number of set bits in this Mask
1050  __hostdev__ uint32_t countOn() const
1051  {
1052  uint32_t sum = 0;
1053  for (const uint64_t *w = mWords, *q = w + WORD_COUNT; w != q; ++w)
1054  sum += util::countOn(*w);
1055  return sum;
1056  }
1057 
1058  /// @brief Return the number of lower set bits in mask up to but excluding the i'th bit
1059  inline __hostdev__ uint32_t countOn(uint32_t i) const
1060  {
1061  uint32_t n = i >> 6, sum = util::countOn(mWords[n] & ((uint64_t(1) << (i & 63u)) - 1u));
1062  for (const uint64_t* w = mWords; n--; ++w)
1063  sum += util::countOn(*w);
1064  return sum;
1065  }
1066 
1067  template<bool On>
1068  class Iterator
1069  {
1070  public:
1071  __hostdev__ Iterator()
1072  : mPos(Mask::SIZE)
1073  , mParent(nullptr)
1074  {
1075  }
1076  __hostdev__ Iterator(uint32_t pos, const Mask* parent)
1077  : mPos(pos)
1078  , mParent(parent)
1079  {
1080  }
1081  Iterator& operator=(const Iterator&) = default;
1082  __hostdev__ uint32_t operator*() const { return mPos; }
1083  __hostdev__ uint32_t pos() const { return mPos; }
1084  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1085  __hostdev__ Iterator& operator++()
1086  {
1087  mPos = mParent->findNext<On>(mPos + 1);
1088  return *this;
1089  }
1090  __hostdev__ Iterator operator++(int)
1091  {
1092  auto tmp = *this;
1093  ++(*this);
1094  return tmp;
1095  }
1096 
1097  private:
1098  uint32_t mPos;
1099  const Mask* mParent;
1100  }; // Member class Iterator
1101 
1102  class DenseIterator
1103  {
1104  public:
1105  __hostdev__ DenseIterator(uint32_t pos = Mask::SIZE)
1106  : mPos(pos)
1107  {
1108  }
1109  DenseIterator& operator=(const DenseIterator&) = default;
1110  __hostdev__ uint32_t operator*() const { return mPos; }
1111  __hostdev__ uint32_t pos() const { return mPos; }
1112  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1113  __hostdev__ DenseIterator& operator++()
1114  {
1115  ++mPos;
1116  return *this;
1117  }
1118  __hostdev__ DenseIterator operator++(int)
1119  {
1120  auto tmp = *this;
1121  ++mPos;
1122  return tmp;
1123  }
1124 
1125  private:
1126  uint32_t mPos;
1127  }; // Member class DenseIterator
1128 
1129  using OnIterator = Iterator<true>;
1130  using OffIterator = Iterator<false>;
1131 
1132  __hostdev__ OnIterator beginOn() const { return OnIterator(this->findFirst<true>(), this); }
1133 
1134  __hostdev__ OffIterator beginOff() const { return OffIterator(this->findFirst<false>(), this); }
1135 
1136  __hostdev__ DenseIterator beginAll() const { return DenseIterator(0); }
1137 
1138  /// @brief Initialize all bits to zero.
1139  __hostdev__ Mask()
1140  {
1141  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1142  mWords[i] = 0;
1143  }
1144  __hostdev__ Mask(bool on)
1145  {
1146  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1147  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1148  mWords[i] = v;
1149  }
1150 
1151  /// @brief Copy constructor
1152  __hostdev__ Mask(const Mask& other)
1153  {
1154  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1155  mWords[i] = other.mWords[i];
1156  }
1157 
1158  /// @brief Return a pointer to the list of words of the bit mask
1159  __hostdev__ uint64_t* words() { return mWords; }
1160  __hostdev__ const uint64_t* words() const { return mWords; }
1161 
1162  /// @brief Assignment operator that works with openvdb::util::NodeMask
1163  template<typename MaskT = Mask>
1164  __hostdev__ Mask& operator=(const MaskT& other)
1165  {
1166  static_assert(sizeof(Mask) == sizeof(MaskT), "Mismatching sizeof");
1167  static_assert(WORD_COUNT == MaskT::WORD_COUNT, "Mismatching word count");
1168  static_assert(LOG2DIM == MaskT::LOG2DIM, "Mismatching LOG2DIM");
1169  auto* src = reinterpret_cast<const uint64_t*>(&other);
1170  for (uint64_t *dst = mWords, *end = dst + WORD_COUNT; dst != end; ++dst)
1171  *dst = *src++;
1172  return *this;
1173  }
1174 
1175  //__hostdev__ Mask& operator=(const Mask& other){return *util::memcpy(this, &other);}
1176  Mask& operator=(const Mask&) = default;
1177 
1178  __hostdev__ bool operator==(const Mask& other) const
1179  {
1180  for (uint32_t i = 0; i < WORD_COUNT; ++i) {
1181  if (mWords[i] != other.mWords[i])
1182  return false;
1183  }
1184  return true;
1185  }
1186 
1187  __hostdev__ bool operator!=(const Mask& other) const { return !((*this) == other); }
1188 
1189  /// @brief Return true if the given bit is set.
1190  __hostdev__ bool isOn(uint32_t n) const { return 0 != (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1191 
1192  /// @brief Return true if the given bit is NOT set.
1193  __hostdev__ bool isOff(uint32_t n) const { return 0 == (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1194 
1195  /// @brief Return true if all the bits are set in this Mask.
1196  __hostdev__ bool isOn() const
1197  {
1198  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1199  if (mWords[i] != ~uint64_t(0))
1200  return false;
1201  return true;
1202  }
1203 
1204  /// @brief Return true if none of the bits are set in this Mask.
1205  __hostdev__ bool isOff() const
1206  {
1207  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1208  if (mWords[i] != uint64_t(0))
1209  return false;
1210  return true;
1211  }
1212 
1213  /// @brief Set the specified bit on.
1214  __hostdev__ void setOn(uint32_t n) { mWords[n >> 6] |= uint64_t(1) << (n & 63); }
1215  /// @brief Set the specified bit off.
1216  __hostdev__ void setOff(uint32_t n) { mWords[n >> 6] &= ~(uint64_t(1) << (n & 63)); }
1217 
1218 #if defined(__CUDACC__) // the following functions only run on the GPU!
1219  __device__ inline void setOnAtomic(uint32_t n)
1220  {
1221  atomicOr(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), 1ull << (n & 63));
1222  }
1223  __device__ inline void setOffAtomic(uint32_t n)
1224  {
1225  atomicAnd(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), ~(1ull << (n & 63)));
1226  }
1227  __device__ inline void setAtomic(uint32_t n, bool on)
1228  {
1229  on ? this->setOnAtomic(n) : this->setOffAtomic(n);
1230  }
1231 #endif
1232  /// @brief Set the specified bit on or off.
1233  __hostdev__ void set(uint32_t n, bool on)
1234  {
1235 #if 1 // switch between branchless and branched implementations
1236  auto& word = mWords[n >> 6];
1237  n &= 63;
1238  word &= ~(uint64_t(1) << n);
1239  word |= uint64_t(on) << n;
1240 #else
1241  on ? this->setOn(n) : this->setOff(n);
1242 #endif
1243  }
1244 
1245  /// @brief Set all bits on
1246  __hostdev__ void setOn()
1247  {
1248  for (uint32_t i = 0; i < WORD_COUNT; ++i)mWords[i] = ~uint64_t(0);
1249  }
1250 
1251  /// @brief Set all bits off
1252  __hostdev__ void setOff()
1253  {
1254  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = uint64_t(0);
1255  }
1256 
1257  /// @brief Set all bits to the specified state (on or off)
1258  __hostdev__ void set(bool on)
1259  {
1260  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1261  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = v;
1262  }
1263  /// @brief Toggle the state of all bits in the mask
1264  __hostdev__ void toggle()
1265  {
1266  uint32_t n = WORD_COUNT;
1267  for (auto* w = mWords; n--; ++w) *w = ~*w;
1268  }
1269  __hostdev__ void toggle(uint32_t n) { mWords[n >> 6] ^= uint64_t(1) << (n & 63); }
1270 
1271  /// @brief Bitwise intersection
1272  __hostdev__ Mask& operator&=(const Mask& other)
1273  {
1274  uint64_t* w1 = mWords;
1275  const uint64_t* w2 = other.mWords;
1276  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= *w2;
1277  return *this;
1278  }
1279  /// @brief Bitwise union
1280  __hostdev__ Mask& operator|=(const Mask& other)
1281  {
1282  uint64_t* w1 = mWords;
1283  const uint64_t* w2 = other.mWords;
1284  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 |= *w2;
1285  return *this;
1286  }
1287  /// @brief Bitwise difference
1288  __hostdev__ Mask& operator-=(const Mask& other)
1289  {
1290  uint64_t* w1 = mWords;
1291  const uint64_t* w2 = other.mWords;
1292  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= ~*w2;
1293  return *this;
1294  }
1295  /// @brief Bitwise XOR
1296  __hostdev__ Mask& operator^=(const Mask& other)
1297  {
1298  uint64_t* w1 = mWords;
1299  const uint64_t* w2 = other.mWords;
1300  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 ^= *w2;
1301  return *this;
1302  }
1303 
1305  template<bool ON>
1306  __hostdev__ uint32_t findFirst() const
1307  {
1308  uint32_t n = 0u;
1309  const uint64_t* w = mWords;
1310  for (; n < WORD_COUNT && !(ON ? *w : ~*w); ++w, ++n);
1311  return n < WORD_COUNT ? (n << 6) + util::findLowestOn(ON ? *w : ~*w) : SIZE;
1312  }
1313 
1315  template<bool ON>
1316  __hostdev__ uint32_t findNext(uint32_t start) const
1317  {
1318  uint32_t n = start >> 6; // initiate
1319  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1320  uint32_t m = start & 63u;
1321  uint64_t b = ON ? mWords[n] : ~mWords[n];
1322  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1323  b &= ~uint64_t(0u) << m; // mask out lower bits
1324  while (!b && ++n < WORD_COUNT) b = ON ? mWords[n] : ~mWords[n]; // find next non-zero word
1325  return b ? (n << 6) + util::findLowestOn(b) : SIZE; // catch last word=0
1326  }
1327 
1329  template<bool ON>
1330  __hostdev__ uint32_t findPrev(uint32_t start) const
1331  {
1332  uint32_t n = start >> 6; // initiate
1333  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1334  uint32_t m = start & 63u;
1335  uint64_t b = ON ? mWords[n] : ~mWords[n];
1336  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1337  b &= (uint64_t(1u) << m) - 1u; // mask out higher bits
1338  while (!b && n) b = ON ? mWords[--n] : ~mWords[--n]; // find previous non-zero word
1339  return b ? (n << 6) + util::findHighestOn(b) : SIZE; // catch first word=0
1340  }
1341 
1342 private:
1343  uint64_t mWords[WORD_COUNT];
1344 }; // Mask class
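// Example (illustrative, not part of the original header): a LeafNode uses Mask<3>, i.e.
// 8^3 = 512 bits, to encode the active state of its voxels. Typical usage:
//
//   Mask<3> mask(false);                  // all 512 bits off
//   mask.setOn(0); mask.setOn(511);       // turn two bits on
//   for (auto it = mask.beginOn(); it; ++it) {
//       uint32_t n = *it;                 // visits 0 and then 511
//   }
//   uint32_t count = mask.countOn();      // 2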
1345 
1346 // ----------------------------> Map <--------------------------------------
1347 
1348 /// @brief Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation
1349 struct Map
1350 { // 264B (not 32B aligned!)
1351  float mMatF[9]; // 9*4B <- 3x3 matrix
1352  float mInvMatF[9]; // 9*4B <- 3x3 matrix
1353  float mVecF[3]; // 3*4B <- translation
1354  float mTaperF; // 4B, placeholder for taper value
1355  double mMatD[9]; // 9*8B <- 3x3 matrix
1356  double mInvMatD[9]; // 9*8B <- 3x3 matrix
1357  double mVecD[3]; // 3*8B <- translation
1358  double mTaperD; // 8B, placeholder for taper value
1359 
1360  /// @brief Default constructor for the identity map
1361  __hostdev__ Map()
1362  : mMatF{ 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1363  , mInvMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1364  , mVecF{0.0f, 0.0f, 0.0f}
1365  , mTaperF{1.0f}
1366  , mMatD{ 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1367  , mInvMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1368  , mVecD{0.0, 0.0, 0.0}
1369  , mTaperD{1.0}
1370  {
1371  }
1372  __hostdev__ Map(double s, const Vec3d& t = Vec3d(0.0, 0.0, 0.0))
1373  : mMatF{float(s), 0.0f, 0.0f, 0.0f, float(s), 0.0f, 0.0f, 0.0f, float(s)}
1374  , mInvMatF{1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s)}
1375  , mVecF{float(t[0]), float(t[1]), float(t[2])}
1376  , mTaperF{1.0f}
1377  , mMatD{s, 0.0, 0.0, 0.0, s, 0.0, 0.0, 0.0, s}
1378  , mInvMatD{1.0 / s, 0.0, 0.0, 0.0, 1.0 / s, 0.0, 0.0, 0.0, 1.0 / s}
1379  , mVecD{t[0], t[1], t[2]}
1380  , mTaperD{1.0}
1381  {
1382  }
1383 
1384  /// @brief Initialize the member data from 3x3 or 4x4 matrices
1385  /// @note This is not __hostdev__ since then MatT=openvdb::Mat4d will produce warnings
1386  template<typename MatT, typename Vec3T>
1387  void set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper = 1.0);
1388 
1389  /// @brief Initialize the member data from 4x4 matrices
1390  /// @note The last (4th) row of invMat is actually ignored.
1391  /// This is not __hostdev__ since then Mat4T=openvdb::Mat4d will produce warnings
1392  template<typename Mat4T>
1393  void set(const Mat4T& mat, const Mat4T& invMat, double taper = 1.0) { this->set(mat, invMat, mat[3], taper); }
1394 
1395  template<typename Vec3T>
1396  void set(double scale, const Vec3T& translation, double taper = 1.0);
1397 
1398  /// @brief Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
1399  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1400  /// @tparam Vec3T Template type of the 3D vector to be mapped
1401  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1402  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1403  template<typename Vec3T>
1404  __hostdev__ Vec3T applyMap(const Vec3T& ijk) const { return math::matMult(mMatD, mVecD, ijk); }
1405 
1406  /// @brief Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
1407  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1408  /// @tparam Vec3T Template type of the 3D vector to be mapped
1409  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1410  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1411  template<typename Vec3T>
1412  __hostdev__ Vec3T applyMapF(const Vec3T& ijk) const { return math::matMult(mMatF, mVecF, ijk); }
1413 
1414  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1415  /// e.g. scale and rotation WITHOUT translation.
1416  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1417  /// @tparam Vec3T Template type of the 3D vector to be mapped
1418  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1419  /// @return linear forward 3x3 mapping of the input vector
1420  template<typename Vec3T>
1421  __hostdev__ Vec3T applyJacobian(const Vec3T& ijk) const { return math::matMult(mMatD, ijk); }
1422 
1423  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1424  /// e.g. scale and rotation WITHOUT translation.
1425  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1426  /// @tparam Vec3T Template type of the 3D vector to be mapped
1427  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1428  /// @return linear forward 3x3 mapping of the input vector
1429  template<typename Vec3T>
1430  __hostdev__ Vec3T applyJacobianF(const Vec3T& ijk) const { return math::matMult(mMatF, ijk); }
1431 
1432  /// @brief Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
1433  /// @note Typically this operation is used for the world -> index mapping
1434  /// @tparam Vec3T Template type of the 3D vector to be mapped
1435  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1436  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1437  template<typename Vec3T>
1438  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const
1439  {
1440  return math::matMult(mInvMatD, Vec3T(xyz[0] - mVecD[0], xyz[1] - mVecD[1], xyz[2] - mVecD[2]));
1441  }
1442 
1443  /// @brief Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
1444  /// @note Typically this operation is used for the world -> index mapping
1445  /// @tparam Vec3T Template type of the 3D vector to be mapped
1446  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1447  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1448  template<typename Vec3T>
1449  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const
1450  {
1451  return math::matMult(mInvMatF, Vec3T(xyz[0] - mVecF[0], xyz[1] - mVecF[1], xyz[2] - mVecF[2]));
1452  }
1453 
1454  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1455  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1456  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1457  /// @tparam Vec3T Template type of the 3D vector to be mapped
1458  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1459  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1460  template<typename Vec3T>
1461  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return math::matMult(mInvMatD, xyz); }
1462 
1463  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1464  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1465  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1466  /// @tparam Vec3T Template type of the 3D vector to be mapped
1467  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1468  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1469  template<typename Vec3T>
1470  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return math::matMult(mInvMatF, xyz); }
1471 
1472  /// @brief Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1473  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1474  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1475  /// @tparam Vec3T Template type of the 3D vector to be mapped
1476  /// @param xyz 3D vector to be mapped
1477  /// @return transposed inverse 3x3 mapping of the input vector
1478  template<typename Vec3T>
1479  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return math::matMultT(mInvMatD, xyz); }
1480  template<typename Vec3T>
1481  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return math::matMultT(mInvMatF, xyz); }
1482 
1483  /// @brief Return a voxel's size in each coordinate direction, measured at the origin
1484  __hostdev__ Vec3d getVoxelSize() const { return this->applyMap(Vec3d(1)) - this->applyMap(Vec3d(0)); }
1485 }; // Map
1486 
1487 template<typename MatT, typename Vec3T>
1488 inline void Map::set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper)
1489 {
1490  float * mf = mMatF, *vf = mVecF, *mif = mInvMatF;
1491  double *md = mMatD, *vd = mVecD, *mid = mInvMatD;
1492  mTaperF = static_cast<float>(taper);
1493  mTaperD = taper;
1494  for (int i = 0; i < 3; ++i) {
1495  *vd++ = translate[i]; //translation
1496  *vf++ = static_cast<float>(translate[i]); //translation
1497  for (int j = 0; j < 3; ++j) {
1498  *md++ = mat[j][i]; //transposed
1499  *mid++ = invMat[j][i];
1500  *mf++ = static_cast<float>(mat[j][i]); //transposed
1501  *mif++ = static_cast<float>(invMat[j][i]);
1502  }
1503  }
1504 }
1505 
1506 template<typename Vec3T>
1507 inline void Map::set(double dx, const Vec3T& trans, double taper)
1508 {
1509  NANOVDB_ASSERT(dx > 0.0);
1510  const double mat[3][3] = { {dx, 0.0, 0.0}, // row 0
1511  {0.0, dx, 0.0}, // row 1
1512  {0.0, 0.0, dx} }; // row 2
1513  const double idx = 1.0 / dx;
1514  const double invMat[3][3] = { {idx, 0.0, 0.0}, // row 0
1515  {0.0, idx, 0.0}, // row 1
1516  {0.0, 0.0, idx} }; // row 2
1517  this->set(mat, invMat, trans, taper);
1518 }
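/* Illustrative sketch (not part of the original header): constructing a Map with a uniform
   voxel size and applying the forward/inverse transformations defined above. The variable
   names below are hypothetical.

   nanovdb::Map map;                                    // identity map
   map.set(0.5, nanovdb::Vec3d(1.0, 2.0, 3.0));         // voxel size 0.5, translation (1,2,3)
   const nanovdb::Vec3d world = map.applyMap(nanovdb::Vec3d(2.0, 0.0, 0.0)); // index -> world = (2,2,3)
   const nanovdb::Vec3d index = map.applyInverseMap(world);                  // world -> index = (2,0,0)
   const nanovdb::Vec3d dx    = map.getVoxelSize();                          // (0.5, 0.5, 0.5)
*/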
1519 
1520 // ----------------------------> GridBlindMetaData <--------------------------------------
1521 
1522 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridBlindMetaData
1523 { // 288 bytes
1524  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less!
1525  int64_t mDataOffset; // byte offset to the blind data, relative to this GridBlindMetaData.
1526  uint64_t mValueCount; // number of blind values, e.g. point count
1527  uint32_t mValueSize;// byte size of each value, e.g. 4 if mDataType=Float and 1 if mDataType=Unknown since that amounts to char
1528  GridBlindDataSemantic mSemantic; // semantic meaning of the data.
1529  GridBlindDataClass mDataClass; // 4 bytes
1530  GridType mDataType; // 4 bytes
1531  char mName[MaxNameSize]; // note this includes the NULL termination
1532  // no padding required for 32 byte alignment
1533 
1534  // disallow copy-construction since methods like blindData and getBlindData use the this pointer!
1535  GridBlindMetaData(const GridBlindMetaData&) = delete;
1536 
1537  // disallow copy-assignment since methods like blindData and getBlindData use the this pointer!
1538  const GridBlindMetaData& operator=(const GridBlindMetaData&) = delete;
1539 
1540  __hostdev__ void setBlindData(void* blindData) { mDataOffset = util::PtrDiff(blindData, this); }
1541 
1542  // unsafe
1543  __hostdev__ const void* blindData() const {return util::PtrAdd(this, mDataOffset);}
1544 
1545  /// @brief Get a const pointer to the blind data represented by this meta data
1546  /// @tparam BlindDataT Expected value type of the blind data.
1547  /// @return Returns NULL if mDataType!=toGridType<BlindDataT>(), else a const pointer of type BlindDataT.
1548  /// @note Use mDataType=Unknown if BlindDataT is a custom data type unknown to NanoVDB.
1549  template<typename BlindDataT>
1550  __hostdev__ const BlindDataT* getBlindData() const
1551  {
1552  //if (mDataType != toGridType<BlindDataT>()) printf("getBlindData mismatch\n");
1553  return mDataType == toGridType<BlindDataT>() ? util::PtrAdd<BlindDataT>(this, mDataOffset) : nullptr;
1554  }
1555 
1556  /// @brief return true if this meta data has a valid combination of semantic, class and value tags
1557  __hostdev__ bool isValid() const
1558  {
1559  auto check = [&]()->bool{
1560  switch (mDataType){
1561  case GridType::Unknown: return mValueSize==1u;// i.e. we encode data as mValueCount chars
1562  case GridType::Float: return mValueSize==4u;
1563  case GridType::Double: return mValueSize==8u;
1564  case GridType::Int16: return mValueSize==2u;
1565  case GridType::Int32: return mValueSize==4u;
1566  case GridType::Int64: return mValueSize==8u;
1567  case GridType::Vec3f: return mValueSize==12u;
1568  case GridType::Vec3d: return mValueSize==24u;
1569  case GridType::Half: return mValueSize==2u;
1570  case GridType::RGBA8: return mValueSize==4u;
1571  case GridType::Fp8: return mValueSize==1u;
1572  case GridType::Fp16: return mValueSize==2u;
1573  case GridType::Vec4f: return mValueSize==16u;
1574  case GridType::Vec4d: return mValueSize==32u;
1575  case GridType::Vec3u8: return mValueSize==3u;
1576  case GridType::Vec3u16: return mValueSize==6u;
1577  default: return true;}// all other combinations are valid
1578  };
1579  return nanovdb::isValid(mDataClass, mSemantic, mDataType) && check();
1580  }
1581 
1582  /// @brief return size in bytes of the blind data represented by this blind meta data
1583  /// @note This size includes possible padding for 32 byte alignment. The actual amount
1584  /// of blind data is mValueCount * mValueSize
1585  __hostdev__ uint64_t blindDataSize() const
1586  {
1587  return math::AlignUp<NANOVDB_DATA_ALIGNMENT>(mValueCount * mValueSize);
1588  }
1589 }; // GridBlindMetaData
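/* Illustrative sketch (not part of the original header): reading typed blind data through the
   accessor above. getBlindData returns NULL on a type mismatch, so client code should always
   test the returned pointer. "sumFloatBlindData" is a hypothetical helper.

   double sumFloatBlindData(const nanovdb::GridBlindMetaData& meta)
   {
       double sum = 0.0;
       if (const float* values = meta.getBlindData<float>()) { // NULL unless mDataType == GridType::Float
           for (uint64_t i = 0; i < meta.mValueCount; ++i) sum += values[i];
       }
       return sum;
   }
*/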
1590 
1591 // ----------------------------> NodeTrait <--------------------------------------
1592 
1593 /// @brief Struct to derive node type from its level in a given
1594 /// grid, tree or root while preserving constness
1595 template<typename GridOrTreeOrRootT, int LEVEL>
1596 struct NodeTrait;
1597 
1598 // Partial template specialization of above Node struct
1599 template<typename GridOrTreeOrRootT>
1600 struct NodeTrait<GridOrTreeOrRootT, 0>
1601 {
1602  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1603  using Type = typename GridOrTreeOrRootT::LeafNodeType;
1604  using type = typename GridOrTreeOrRootT::LeafNodeType;
1605 };
1606 template<typename GridOrTreeOrRootT>
1607 struct NodeTrait<const GridOrTreeOrRootT, 0>
1608 {
1609  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1610  using Type = const typename GridOrTreeOrRootT::LeafNodeType;
1611  using type = const typename GridOrTreeOrRootT::LeafNodeType;
1612 };
1613 
1614 template<typename GridOrTreeOrRootT>
1615 struct NodeTrait<GridOrTreeOrRootT, 1>
1616 {
1617  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1618  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1619  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1620 };
1621 template<typename GridOrTreeOrRootT>
1622 struct NodeTrait<const GridOrTreeOrRootT, 1>
1623 {
1624  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1625  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1626  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1627 };
1628 template<typename GridOrTreeOrRootT>
1629 struct NodeTrait<GridOrTreeOrRootT, 2>
1630 {
1631  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1632  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1633  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1634 };
1635 template<typename GridOrTreeOrRootT>
1636 struct NodeTrait<const GridOrTreeOrRootT, 2>
1637 {
1638  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1639  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1640  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1641 };
1642 template<typename GridOrTreeOrRootT>
1643 struct NodeTrait<GridOrTreeOrRootT, 3>
1644 {
1645  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1646  using Type = typename GridOrTreeOrRootT::RootNodeType;
1647  using type = typename GridOrTreeOrRootT::RootNodeType;
1648 };
1649 
1650 template<typename GridOrTreeOrRootT>
1651 struct NodeTrait<const GridOrTreeOrRootT, 3>
1652 {
1653  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1654  using Type = const typename GridOrTreeOrRootT::RootNodeType;
1655  using type = const typename GridOrTreeOrRootT::RootNodeType;
1656 };
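/* Illustrative sketch (not part of the original header): NodeTrait maps a level to its node
   type, which is convenient in generic code that walks the fixed three-level tree. The example
   assumes the NanoTree alias defined later in this header.

   using FloatTree = nanovdb::NanoTree<float>;
   using LeafT  = nanovdb::NodeTrait<FloatTree, 0>::type; // leaf nodes (8^3 voxels)
   using LowerT = nanovdb::NodeTrait<FloatTree, 1>::type; // lower internal nodes
   using UpperT = nanovdb::NodeTrait<FloatTree, 2>::type; // upper internal nodes
   using RootT  = nanovdb::NodeTrait<FloatTree, 3>::type; // root node
*/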
1657 
1658 // ----------------------------> Forward declarations of random access methods <--------------------------------------
1659 
1660 template<typename BuildT>
1661 struct GetValue;
1662 template<typename BuildT>
1663 struct SetValue;
1664 template<typename BuildT>
1665 struct SetVoxel;
1666 template<typename BuildT>
1667 struct GetState;
1668 template<typename BuildT>
1669 struct GetDim;
1670 template<typename BuildT>
1671 struct GetLeaf;
1672 template<typename BuildT>
1673 struct ProbeValue;
1674 template<typename BuildT>
1675 struct GetNodeInfo;
1676 
1677 // ----------------------------> CheckMode <----------------------------------
1678 
1679 /// @brief List of different modes for computing a checksum
1680 enum class CheckMode : uint32_t { Disable = 0, // no computation
1681  Empty = 0,
1682  Half = 1,
1683  Partial = 1, // fast but approximate
1684  Default = 1, // defaults to Partial
1685  Full = 2, // slow but accurate
1686  End = 3, // marks the end of the enum list
1687  StrLen = 9 + End};
1688 
1689 /// @brief Prints CheckMode enum to a c-string
1690 /// @param dst Destination c-string
1691 /// @param mode CheckMode enum to be converted to string
1692 /// @return destination string @c dst
1693 __hostdev__ inline char* toStr(char *dst, CheckMode mode)
1694 {
1695  switch (mode){
1696  case CheckMode::Half: return util::strcpy(dst, "half");
1697  case CheckMode::Full: return util::strcpy(dst, "full");
1698  default: return util::strcpy(dst, "disabled");
1699  }
1700 }
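/* Illustrative sketch (not part of the original header): converting a CheckMode to a
   human-readable c-string. The buffer below is sized generously to hold the longest
   result ("disabled") plus NULL termination.

   char str[16];
   nanovdb::toStr(str, nanovdb::CheckMode::Full); // str now holds "full"
*/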
1701 
1702 // ----------------------------> Checksum <----------------------------------
1703 
1704 /// @brief Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data
1705 /// and one for the remaining grid nodes.
1706 class Checksum
1707 {
1708  /// Three types of checksums:
1709  /// 1) Empty: all 64 bits are on (used to signify a disabled or undefined checksum)
1710  /// 2) Half: Upper 32 bits are on and not all of lower 32 bits are on (lower 32 bits checksum head of grid)
1711  /// 3) Full: Not all of the 64 bits are one (lower 32 bits checksum head of grid and upper 32 bits checksum tail of grid)
1712  union { uint32_t mCRC32[2]; uint64_t mCRC64; };// mCRC32[0] is checksum of Grid, Tree and Root, and mCRC32[1] is checksum of nodes
1713 
1714 public:
1715 
1716  static constexpr uint32_t EMPTY32 = ~uint32_t{0};
1717  static constexpr uint64_t EMPTY64 = ~uint64_t(0);
1718 
1719  /// @brief default constructor initializes the checksum to EMPTY
1720  __hostdev__ Checksum() : mCRC64{EMPTY64} {}
1721 
1722  /// @brief Constructor that allows the two 32bit checksums to be initialized explicitly
1723  /// @param head Initial 32bit CRC checksum of grid, tree and root data
1724  /// @param tail Initial 32bit CRC checksum of all the nodes and blind data
1725  __hostdev__ Checksum(uint32_t head, uint32_t tail) : mCRC32{head, tail} {}
1726 
1727  /// @brief Constructor that initializes the checksum from a single 64 bit value and a mode
1728  /// @param checksum 64 bit checksum encoding both the head and tail CRC32 codes
1729  /// @param mode CheckMode that determines whether the checksum is disabled, partial or full
1730  __hostdev__ Checksum(uint64_t checksum, CheckMode mode = CheckMode::Full) : mCRC64{mode == CheckMode::Disable ? EMPTY64 : checksum}
1731  {
1732  if (mode == CheckMode::Partial) mCRC32[1] = EMPTY32;
1733  }
1734 
1735  /// @brief return the 64 bit checksum of this instance
1736  [[deprecated("Use Checksum::data instead.")]]
1737  __hostdev__ uint64_t checksum() const { return mCRC64; }
1738  [[deprecated("Use Checksum::head and Checksum::tail instead.")]]
1739  __hostdev__ uint32_t& checksum(int i) {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1740  [[deprecated("Use Checksum::head and Checksum::tail instead.")]]
1741  __hostdev__ uint32_t checksum(int i) const {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1742 
1743  __hostdev__ uint64_t full() const { return mCRC64; }
1744  __hostdev__ uint64_t& full() { return mCRC64; }
1745  __hostdev__ uint32_t head() const { return mCRC32[0]; }
1746  __hostdev__ uint32_t& head() { return mCRC32[0]; }
1747  __hostdev__ uint32_t tail() const { return mCRC32[1]; }
1748  __hostdev__ uint32_t& tail() { return mCRC32[1]; }
1749 
1750  /// @brief return true if the 64 bit checksum is partial, i.e. of head only
1751  [[deprecated("Use Checksum::isHalf instead.")]]
1752  __hostdev__ bool isPartial() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1753  __hostdev__ bool isHalf() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1754 
1755  /// @brief return true if the 64 bit checksum is full, i.e. of both head and nodes
1756  __hostdev__ bool isFull() const { return mCRC64 != EMPTY64 && mCRC32[1] != EMPTY32; }
1757 
1758  /// @brief return true if the 64 bit checksum is disabled (unset)
1759  __hostdev__ bool isEmpty() const { return mCRC64 == EMPTY64; }
1760 
1761  __hostdev__ void disable() { mCRC64 = EMPTY64; }
1762 
1763  /// @brief return the mode of the 64 bit checksum
1764  __hostdev__ CheckMode mode() const
1765  {
1766  return mCRC64 == EMPTY64 ? CheckMode::Disable :
1767  mCRC32[1] == EMPTY32 ? CheckMode::Partial : CheckMode::Full;
1768  }
1769 
1770  /// @brief return true if the checksums are identical
1771  /// @param rhs other Checksum
1772  __hostdev__ bool operator==(const Checksum &rhs) const {return mCRC64 == rhs.mCRC64;}
1773 
1774  /// @brief return true if the checksums are not identical
1775  /// @param rhs other Checksum
1776  __hostdev__ bool operator!=(const Checksum &rhs) const {return mCRC64 != rhs.mCRC64;}
1777 };// Checksum
1778 
1779 /// @brief Maps 64 bit checksum to CheckMode enum
1780 /// @param checksum 64 bit checksum with two CRC32 codes
1781 /// @return CheckMode enum
1782 __hostdev__ inline CheckMode toCheckMode(const Checksum &checksum){return checksum.mode();}
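/* Illustrative sketch (not part of the original header): querying the state of a Checksum.
   A default-constructed instance is empty (i.e. disabled) and its mode can be recovered with
   Checksum::mode or the free function toCheckMode above.

   nanovdb::Checksum cs;                              // empty, i.e. checksums are disabled
   // cs.isEmpty() == true, toCheckMode(cs) == CheckMode::Disable
   cs = nanovdb::Checksum(0x12345678u, 0x9abcdef0u);  // explicit head and tail CRC32 codes
   // cs.isFull() == true, toCheckMode(cs) == CheckMode::Full
*/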
1783 
1784 // ----------------------------> Grid <--------------------------------------
1785 
1786 /*
1787  The following class and comment is for internal use only
1788 
1789  Memory layout:
1790 
1791  Grid -> 39 x double (world bbox and affine transformation)
1792  Tree -> Root 3 x ValueType + int32_t + N x Tiles (background,min,max,tileCount + tileCount x Tiles)
1793 
1794  N2 upper InternalNodes each with 2 bit masks, N2 tiles, and min/max values
1795 
1796  N1 lower InternalNodes each with 2 bit masks, N1 tiles, and min/max values
1797 
1798  N0 LeafNodes each with a bit mask, N0 ValueTypes and min/max
1799 
1800  Example layout: ("---" implies it has a custom offset, "..." implies zero or more)
1801  [GridData][TreeData]---[RootData][ROOT TILES...]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
1802 */
1803 
1804 /// @brief Struct with all the member data of the Grid (useful during serialization of an openvdb grid)
1805 ///
1806 /// @note The transform is assumed to be affine (so linear) and have uniform scale! So frustum transforms
1807 /// and non-uniform scaling are not supported (primarily because they complicate ray-tracing in index space)
1808 ///
1809 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
1810 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridData
1811 { // sizeof(GridData) = 672B
1812  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less
1813  uint64_t mMagic; // 8B (0) magic to validate it is valid grid data.
1814  Checksum mChecksum; // 8B (8). Checksum of grid buffer.
1815  Version mVersion; // 4B (16) major, minor, and patch version numbers
1816  BitFlags<32> mFlags; // 4B (20). flags for grid.
1817  uint32_t mGridIndex; // 4B (24). Index of this grid in the buffer
1818  uint32_t mGridCount; // 4B (28). Total number of grids in the buffer
1819  uint64_t mGridSize; // 8B (32). byte count of this entire grid occupied in the buffer.
1820  char mGridName[MaxNameSize]; // 256B (40)
1821  Map mMap; // 264B (296). affine transformation between index and world space in both single and double precision
1822  Vec3dBBox mWorldBBox; // 48B (560). floating-point AABB of active values in WORLD SPACE (2 x 3 doubles)
1823  Vec3d mVoxelSize; // 24B (608). size of a voxel in world units
1824  GridClass mGridClass; // 4B (632).
1825  GridType mGridType; // 4B (636).
1826  int64_t mBlindMetadataOffset; // 8B (640). offset to beginning of GridBlindMetaData structures that follow this grid.
1827  uint32_t mBlindMetadataCount; // 4B (648). count of GridBlindMetaData structures that follow this grid.
1828  uint32_t mData0; // 4B (652) unused
1829  uint64_t mData1; // 8B (656) is used for the total number of values indexed by an IndexGrid
1830  uint64_t mData2; // 8B (664) padding to 32 B alignment
1831  /// @brief Use this method to initialize most member data
1832  GridData& operator=(const GridData&) = default;
1833  //__hostdev__ GridData& operator=(const GridData& other){return *util::memcpy(this, &other);}
1834  __hostdev__ void init(std::initializer_list<GridFlags> list = {GridFlags::IsBreadthFirst},
1835  uint64_t gridSize = 0u,
1836  const Map& map = Map(),
1837  GridType gridType = GridType::Unknown,
1838  GridClass gridClass = GridClass::Unknown)
1839  {
1840 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1841  mMagic = NANOVDB_MAGIC_GRID;
1842 #else
1843  mMagic = NANOVDB_MAGIC_NUMB;
1844 #endif
1845  mChecksum.disable();// all 64 bits ON means checksum is disabled
1846  mVersion = Version();
1847  mFlags.initMask(list);
1848  mGridIndex = 0u;
1849  mGridCount = 1u;
1850  mGridSize = gridSize;
1851  mGridName[0] = '\0';
1852  mMap = map;
1853  mWorldBBox = Vec3dBBox();// invalid bbox
1854  mVoxelSize = map.getVoxelSize();
1855  mGridClass = gridClass;
1856  mGridType = gridType;
1857  mBlindMetadataOffset = mGridSize; // i.e. no blind data
1858  mBlindMetadataCount = 0u; // i.e. no blind data
1859  mData0 = 0u; // zero padding
1860  mData1 = 0u; // only used for index and point grids
1861  mData2 = NANOVDB_MAGIC_GRID; // since version 32.6.0 (will change in the future)
1862  }
1863  /// @brief return true if the magic number and the version are both valid
1864  __hostdev__ bool isValid() const {
1865  // Before v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and mData2 was undefined
1866  // For v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and toMagic(mData2) = MagicType::NanoGrid
1867  // After v32.7.X: toMagic(mMagic) = MagicType::NanoGrid and mData2 will again be undefined
1868  const MagicType magic = toMagic(mMagic);
1869  if (magic == MagicType::NanoGrid || toMagic(mData2) == MagicType::NanoGrid) return true;
1870  bool test = magic == MagicType::NanoVDB;// could be GridData or io::FileHeader
1871  if (test) test = mVersion.isCompatible();
1872  if (test) test = mGridCount > 0u && mGridIndex < mGridCount;
1873  if (test) test = mGridClass < GridClass::End && mGridType < GridType::End;
1874  return test;
1875  }
1876  // Set and unset various bit flags
1877  __hostdev__ void setMinMaxOn(bool on = true) { mFlags.setMask(GridFlags::HasMinMax, on); }
1878  __hostdev__ void setBBoxOn(bool on = true) { mFlags.setMask(GridFlags::HasBBox, on); }
1879  __hostdev__ void setLongGridNameOn(bool on = true) { mFlags.setMask(GridFlags::HasLongGridName, on); }
1880  __hostdev__ void setAverageOn(bool on = true) { mFlags.setMask(GridFlags::HasAverage, on); }
1881  __hostdev__ void setStdDeviationOn(bool on = true) { mFlags.setMask(GridFlags::HasStdDeviation, on); }
1882  __hostdev__ bool setGridName(const char* src)
1883  {
1884  const bool success = (util::strncpy(mGridName, src, MaxNameSize)[MaxNameSize-1] == '\0');
1885  if (!success) mGridName[MaxNameSize-1] = '\0';
1886  return success; // returns true if input grid name is NOT longer than MaxNameSize characters
1887  }
1888  // Affine transformations based on double precision
1889  template<typename Vec3T>
1890  __hostdev__ Vec3T applyMap(const Vec3T& xyz) const { return mMap.applyMap(xyz); } // Pos: index -> world
1891  template<typename Vec3T>
1892  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const { return mMap.applyInverseMap(xyz); } // Pos: world -> index
1893  template<typename Vec3T>
1894  __hostdev__ Vec3T applyJacobian(const Vec3T& xyz) const { return mMap.applyJacobian(xyz); } // Dir: index -> world
1895  template<typename Vec3T>
1896  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return mMap.applyInverseJacobian(xyz); } // Dir: world -> index
1897  template<typename Vec3T>
1898  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return mMap.applyIJT(xyz); }
1899  // Affine transformations based on single precision
1900  template<typename Vec3T>
1901  __hostdev__ Vec3T applyMapF(const Vec3T& xyz) const { return mMap.applyMapF(xyz); } // Pos: index -> world
1902  template<typename Vec3T>
1903  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const { return mMap.applyInverseMapF(xyz); } // Pos: world -> index
1904  template<typename Vec3T>
1905  __hostdev__ Vec3T applyJacobianF(const Vec3T& xyz) const { return mMap.applyJacobianF(xyz); } // Dir: index -> world
1906  template<typename Vec3T>
1907  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return mMap.applyInverseJacobianF(xyz); } // Dir: world -> index
1908  template<typename Vec3T>
1909  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return mMap.applyIJTF(xyz); }
1910 
1911  /// @brief Return a non-const void pointer to the tree
1912  __hostdev__ void* treePtr() { return this + 1; }// TreeData is always right after GridData
1913 
1914  /// @brief Return a const void pointer to the tree
1915  __hostdev__ const void* treePtr() const { return this + 1; }// TreeData is always right after GridData
1916 
1917  /// @brief Return a const void pointer to the first node at @c LEVEL
1918  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
1919  template <uint32_t LEVEL>
1920  __hostdev__ const void* nodePtr() const
1921  {
1922  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
1923  const void *treeData = this + 1;// TreeData is always right after GridData
1924  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
1925  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
1926  }
1927 
1928  /// @brief Return a non-const void pointer to the first node at @c LEVEL
1929  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
1930  /// @warning If no nodes exist at @c LEVEL, NULL is returned
1931  template <uint32_t LEVEL>
1932  __hostdev__ void* nodePtr()
1933  {
1934  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
1935  void *treeData = this + 1;// TreeData is always right after GridData
1936  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
1937  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
1938  }
1939 
1940  /// @brief Return number of nodes at @c LEVEL
1941  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 2 means upper node
1942  template <uint32_t LEVEL>
1943  __hostdev__ uint32_t nodeCount() const
1944  {
1945  static_assert(LEVEL >= 0 && LEVEL < 3, "invalid LEVEL template parameter");
1946  return *util::PtrAdd<uint32_t>(this + 1, 4*(8 + LEVEL));// TreeData is always right after GridData
1947  }
1948 
1949  /// @brief Returns a const reference to the blindMetaData at the specified linear offset.
1950  ///
1951  /// @warning The linear offset is assumed to be in the valid range
1952  __hostdev__ const GridBlindMetaData* blindMetaData(uint32_t n) const
1953  {
1954  NANOVDB_ASSERT(n < mBlindMetadataCount);
1955  return util::PtrAdd<GridBlindMetaData>(this, mBlindMetadataOffset) + n;
1956  }
1957 
1958  __hostdev__ const char* gridName() const
1959  {
1960  if (mFlags.isMaskOn(GridFlags::HasLongGridName)) {// search for first blind meta data that contains a name
1961  NANOVDB_ASSERT(mBlindMetadataCount > 0);
1962  for (uint32_t i = 0; i < mBlindMetadataCount; ++i) {
1963  const auto* metaData = this->blindMetaData(i);// EXTREMELY important to be a pointer
1964  if (metaData->mDataClass == GridBlindDataClass::GridName) {
1965  NANOVDB_ASSERT(metaData->mDataType == GridType::Unknown);
1966  return metaData->template getBlindData<const char>();
1967  }
1968  }
1969  NANOVDB_ASSERT(false); // should never hit this!
1970  }
1971  return mGridName;
1972  }
1973 
1974  /// @brief Return memory usage in bytes for this class only.
1975  __hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
1976 
1977  /// @brief return AABB of active values in world space
1978  __hostdev__ const Vec3dBBox& worldBBox() const { return mWorldBBox; }
1979 
1980  /// @brief return AABB of active values in index space
1981  __hostdev__ const CoordBBox& indexBBox() const {return *(const CoordBBox*)(this->nodePtr<3>());}
1982 
1983  /// @brief return the size of the root table
1984  __hostdev__ uint32_t rootTableSize() const
1985  {
1986  const void *root = this->nodePtr<3>();
1987  return root ? *util::PtrAdd<uint32_t>(root, sizeof(CoordBBox)) : 0u;
1988  }
1989 
1990  /// @brief test if the grid is empty, i.e. the root table has size 0
1991  /// @return true if this grid contains no data whatsoever
1992  __hostdev__ bool isEmpty() const {return this->rootTableSize() == 0u;}
1993 
1994  /// @brief return true if RootData follows TreeData in memory without any extra padding
1995  /// @details TreeData is always following right after GridData, but the same might not be true for RootData
1996  __hostdev__ bool isRootConnected() const { return *(const uint64_t*)((const char*)(this + 1) + 24) == 64u;}
1997 }; // GridData
1998 
1999 // Forward declaration of accelerated random access class
2000 template<typename BuildT, int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1>
2001 class ReadAccessor;
2002 
2003 template<typename BuildT>
2004 using DefaultReadAccessor = ReadAccessor<BuildT, 0, 1, 2>;
2005 
2006 /// @brief Highest level of the data structure. Contains a tree and a world->index
2007 /// transform (that currently only supports uniform scaling and translation).
2008 ///
2009 /// @note Use the API of this class to interface with client code
2010 template<typename TreeT>
2011 class Grid : public GridData
2012 {
2013 public:
2014  using TreeType = TreeT;
2015  using RootType = typename TreeT::RootType;
2016  using RootNodeType = typename TreeT::RootNodeType;
2017  using UpperNodeType = typename RootNodeType::ChildNodeType;
2018  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2019  using LeafNodeType = typename RootType::LeafNodeType;
2020  using DataType = GridData;
2021  using ValueType = typename TreeT::ValueType;
2022  using BuildType = typename TreeT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2023  using CoordType = typename TreeT::CoordType;
2024  using AccessorType = DefaultReadAccessor<BuildType>;
2025 
2026  /// @brief Disallow constructions, copy and assignment
2027  ///
2028  /// @note Only a Serializer, defined elsewhere, can instantiate this class
2029  Grid(const Grid&) = delete;
2030  Grid& operator=(const Grid&) = delete;
2031  ~Grid() = delete;
2032 
2033  __hostdev__ Version version() const { return DataType::mVersion; }
2034 
2035  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2036 
2037  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2038 
2039  /// @brief Return memory usage in bytes for this class only.
2040  //__hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2041 
2042  /// @brief Return the memory footprint of the entire grid, i.e. including all nodes and blind data
2043  __hostdev__ uint64_t gridSize() const { return DataType::mGridSize; }
2044 
2045  /// @brief Return index of this grid in the buffer
2046  __hostdev__ uint32_t gridIndex() const { return DataType::mGridIndex; }
2047 
2048  /// @brief Return total number of grids in the buffer
2049  __hostdev__ uint32_t gridCount() const { return DataType::mGridCount; }
2050 
2051  /// @brief Return the total number of values indexed by this IndexGrid
2052  ///
2053  /// @note This method is only defined for IndexGrid = NanoGrid<ValueIndex || ValueOnIndex || ValueIndexMask || ValueOnIndexMask>
2054  template<typename T = BuildType>
2055  __hostdev__ typename util::enable_if<BuildTraits<T>::is_index, const uint64_t&>::type
2056  valueCount() const { return DataType::mData1; }
2057 
2058  /// @brief Return the total number of points indexed by this PointGrid
2059  ///
2060  /// @note This method is only defined for PointGrid = NanoGrid<Point>
2061  template<typename T = BuildType>
2062  __hostdev__ typename util::enable_if<util::is_same<T, Point>::value, const uint64_t&>::type
2063  pointCount() const { return DataType::mData1; }
2064 
2065  /// @brief Return a const reference to the tree
2066  __hostdev__ const TreeT& tree() const { return *reinterpret_cast<const TreeT*>(this->treePtr()); }
2067 
2068  /// @brief Return a non-const reference to the tree
2069  __hostdev__ TreeT& tree() { return *reinterpret_cast<TreeT*>(this->treePtr()); }
2070 
2071  /// @brief Return a new instance of a ReadAccessor used to access values in this grid
2072  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->tree().root()); }
2073 
2074  /// @brief Return a const reference to the size of a voxel in world units
2075  __hostdev__ const Vec3d& voxelSize() const { return DataType::mVoxelSize; }
2076 
2077  /// @brief Return a const reference to the Map for this grid
2078  __hostdev__ const Map& map() const { return DataType::mMap; }
2079 
2080  /// @brief world to index space transformation
2081  template<typename Vec3T>
2082  __hostdev__ Vec3T worldToIndex(const Vec3T& xyz) const { return this->applyInverseMap(xyz); }
2083 
2084  /// @brief index to world space transformation
2085  template<typename Vec3T>
2086  __hostdev__ Vec3T indexToWorld(const Vec3T& xyz) const { return this->applyMap(xyz); }
2087 
2088  /// @brief transformation from index space direction to world space direction
2089  /// @warning assumes dir to be normalized
2090  template<typename Vec3T>
2091  __hostdev__ Vec3T indexToWorldDir(const Vec3T& dir) const { return this->applyJacobian(dir); }
2092 
2093  /// @brief transformation from world space direction to index space direction
2094  /// @warning assumes dir to be normalized
2095  template<typename Vec3T>
2096  __hostdev__ Vec3T worldToIndexDir(const Vec3T& dir) const { return this->applyInverseJacobian(dir); }
2097 
2098  /// @brief transform the gradient from index space to world space.
2099  /// @details Applies the inverse jacobian transform map.
2100  template<typename Vec3T>
2101  __hostdev__ Vec3T indexToWorldGrad(const Vec3T& grad) const { return this->applyIJT(grad); }
2102 
2103  /// @brief world to index space transformation
2104  template<typename Vec3T>
2105  __hostdev__ Vec3T worldToIndexF(const Vec3T& xyz) const { return this->applyInverseMapF(xyz); }
2106 
2107  /// @brief index to world space transformation
2108  template<typename Vec3T>
2109  __hostdev__ Vec3T indexToWorldF(const Vec3T& xyz) const { return this->applyMapF(xyz); }
2110 
2111  /// @brief transformation from index space direction to world space direction
2112  /// @warning assumes dir to be normalized
2113  template<typename Vec3T>
2114  __hostdev__ Vec3T indexToWorldDirF(const Vec3T& dir) const { return this->applyJacobianF(dir); }
2115 
2116  /// @brief transformation from world space direction to index space direction
2117  /// @warning assumes dir to be normalized
2118  template<typename Vec3T>
2119  __hostdev__ Vec3T worldToIndexDirF(const Vec3T& dir) const { return this->applyInverseJacobianF(dir); }
2120 
2121  /// @brief Transforms the gradient from index space to world space.
2122  /// @details Applies the inverse jacobian transform map.
2123  template<typename Vec3T>
2124  __hostdev__ Vec3T indexToWorldGradF(const Vec3T& grad) const { return DataType::applyIJTF(grad); }
2125 
2126  /// @brief Computes an AABB of active values in world space
2127  //__hostdev__ const Vec3dBBox& worldBBox() const { return DataType::mWorldBBox; }
2128 
2129  /// @brief Computes an AABB of active values in index space
2130  ///
2131  /// @note This method is returning a floating point bounding box and not a CoordBBox. This makes
2132  /// it more useful for clipping rays.
2133  //__hostdev__ const BBox<CoordType>& indexBBox() const { return this->tree().bbox(); }
2134 
2135  /// @brief Return the total number of active voxels in this tree.
2136  __hostdev__ uint64_t activeVoxelCount() const { return this->tree().activeVoxelCount(); }
2137 
2138  /// @brief Methods related to the classification of this grid
2139  __hostdev__ bool isValid() const { return DataType::isValid(); }
2140  __hostdev__ const GridType& gridType() const { return DataType::mGridType; }
2141  __hostdev__ const GridClass& gridClass() const { return DataType::mGridClass; }
2142  __hostdev__ bool isLevelSet() const { return DataType::mGridClass == GridClass::LevelSet; }
2143  __hostdev__ bool isFogVolume() const { return DataType::mGridClass == GridClass::FogVolume; }
2144  __hostdev__ bool isStaggered() const { return DataType::mGridClass == GridClass::Staggered; }
2145  __hostdev__ bool isPointIndex() const { return DataType::mGridClass == GridClass::PointIndex; }
2146  __hostdev__ bool isGridIndex() const { return DataType::mGridClass == GridClass::IndexGrid; }
2147  __hostdev__ bool isPointData() const { return DataType::mGridClass == GridClass::PointData; }
2148  __hostdev__ bool isMask() const { return DataType::mGridClass == GridClass::Topology; }
2149  __hostdev__ bool isUnknown() const { return DataType::mGridClass == GridClass::Unknown; }
2150  __hostdev__ bool hasMinMax() const { return DataType::mFlags.isMaskOn(GridFlags::HasMinMax); }
2151  __hostdev__ bool hasBBox() const { return DataType::mFlags.isMaskOn(GridFlags::HasBBox); }
2152  __hostdev__ bool hasLongGridName() const { return DataType::mFlags.isMaskOn(GridFlags::HasLongGridName); }
2153  __hostdev__ bool hasAverage() const { return DataType::mFlags.isMaskOn(GridFlags::HasAverage); }
2154  __hostdev__ bool hasStdDeviation() const { return DataType::mFlags.isMaskOn(GridFlags::HasStdDeviation); }
2155  __hostdev__ bool isBreadthFirst() const { return DataType::mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
2156 
2157  /// @brief return true if the specified node type is laid out breadth-first in memory and has a fixed size.
2158  /// This allows for sequential access to the nodes.
2159  template<typename NodeT>
2160  __hostdev__ bool isSequential() const { return NodeT::FIXED_SIZE && this->isBreadthFirst(); }
2161 
2162  /// @brief return true if the specified node level is laid out breadth-first in memory and has a fixed size.
2163  /// This allows for sequential access to the nodes.
2164  template<int LEVEL>
2165  __hostdev__ bool isSequential() const { return NodeTrait<TreeT, LEVEL>::type::FIXED_SIZE && this->isBreadthFirst(); }
2166 
2167  /// @brief return true if nodes at all levels can safely be accessed with simple linear offsets
2168  __hostdev__ bool isSequential() const { return UpperNodeType::FIXED_SIZE && LowerNodeType::FIXED_SIZE && LeafNodeType::FIXED_SIZE && this->isBreadthFirst(); }
2169 
2170  /// @brief Return a c-string with the name of this grid
2171  __hostdev__ const char* gridName() const { return DataType::gridName(); }
2172 
2173  /// @brief Return a c-string with the name of this grid, truncated to 255 characters
2174  __hostdev__ const char* shortGridName() const { return DataType::mGridName; }
2175 
2176  /// @brief Return checksum of the grid buffer.
2177  __hostdev__ const Checksum& checksum() const { return DataType::mChecksum; }
2178 
2179  /// @brief Return true if this grid is empty, i.e. contains no values or nodes.
2180  //__hostdev__ bool isEmpty() const { return this->tree().isEmpty(); }
2181 
2182  /// @brief Return the count of blind-data encoded in this grid
2183  __hostdev__ uint32_t blindDataCount() const { return DataType::mBlindMetadataCount; }
2184 
2185  /// @brief Return the index of the first blind data with specified name if found, otherwise -1.
2186  __hostdev__ int findBlindData(const char* name) const;
2187 
2188  /// @brief Return the index of the first blind data with specified semantic if found, otherwise -1.
2189  __hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const;
2190 
2191  /// @brief Returns a const pointer to the blindData at the specified linear offset.
2192  ///
2193  /// @warning Pointer might be NULL and the linear offset is assumed to be in the valid range
2194  // this method is deprecated !!!!
2195  [[deprecated("Use Grid::getBlindData<T>() instead.")]]
2196  __hostdev__ const void* blindData(uint32_t n) const
2197  {
2198  printf("\nnanovdb::Grid::blindData is unsafe and hence deprecated! Please use nanovdb::Grid::getBlindData instead.\n\n");
2199  NANOVDB_ASSERT(n < DataType::mBlindMetadataCount);
2200  return this->blindMetaData(n).blindData();
2201  }
2202 
2203  template <typename BlindDataT>
2204  __hostdev__ const BlindDataT* getBlindData(uint32_t n) const
2205  {
2206  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2207  return this->blindMetaData(n).template getBlindData<BlindDataT>();// NULL if mismatching BlindDataT
2208  }
2209 
2210  template <typename BlindDataT>
2211  __hostdev__ BlindDataT* getBlindData(uint32_t n)
2212  {
2213  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2214  return const_cast<BlindDataT*>(this->blindMetaData(n).template getBlindData<BlindDataT>());// NULL if mismatching BlindDataT
2215  }
2216 
2217  __hostdev__ const GridBlindMetaData& blindMetaData(uint32_t n) const { return *DataType::blindMetaData(n); }
2218 
2219 private:
2220  static_assert(sizeof(GridData) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(GridData) is misaligned");
2221 }; // Class Grid
2222 
2223 template<typename TreeT>
2224 __hostdev__ int Grid<TreeT>::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
2225 {
2226  for (uint32_t i = 0, n = this->blindDataCount(); i < n; ++i) {
2227  if (this->blindMetaData(i).mSemantic == semantic)
2228  return int(i);
2229  }
2230  return -1;
2231 }
2232 
2233 template<typename TreeT>
2234 __hostdev__ int Grid<TreeT>::findBlindData(const char* name) const
2235 {
2236  auto test = [&](int n) {
2237  const char* str = this->blindMetaData(n).mName;
2238  for (int i = 0; i < GridBlindMetaData::MaxNameSize; ++i) {
2239  if (name[i] != str[i])
2240  return false;
2241  if (name[i] == '\0' && str[i] == '\0')
2242  return true;
2243  }
2244  return true; // all MaxNameSize characters matched
2245  };
2246  for (int i = 0, n = this->blindDataCount(); i < n; ++i)
2247  if (test(i))
2248  return i;
2249  return -1;
2250 }
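/* Illustrative sketch (not part of the original header): typical read-only client use of a
   Grid produced elsewhere (e.g. by createNanoGrid or loaded from a file). The function name
   "sampleGrid" is hypothetical; it assumes the NanoGrid alias defined later in this header.

   float sampleGrid(const nanovdb::NanoGrid<float>& grid, const nanovdb::Vec3d& worldPos)
   {
       auto acc = grid.getAccessor();                           // use one accessor per thread
       const nanovdb::Vec3d ijk = grid.worldToIndex(worldPos);  // world -> index space
       return acc.getValue(nanovdb::Coord::Floor(ijk));         // nearest-voxel lookup
   }
*/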
2251 
2252 // ----------------------------> Tree <--------------------------------------
2253 
2254 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) TreeData
2255 { // sizeof(TreeData) == 64B
2256  int64_t mNodeOffset[4];// 32B, byte offset from this tree to first leaf, lower, upper and root node. If mNodeCount[N]=0 => mNodeOffset[N]==mNodeOffset[N+1]
2257  uint32_t mNodeCount[3]; // 12B, total number of nodes of type: leaf, lower internal, upper internal
2258  uint32_t mTileCount[3]; // 12B, total number of active tile values at the lower internal, upper internal and root node levels
2259  uint64_t mVoxelCount; // 8B, total number of active voxels in the root and all its child nodes.
2260  // No padding since it's always 32B aligned
2261  //__hostdev__ TreeData& operator=(const TreeData& other){return *util::memcpy(this, &other);}
2262  TreeData& operator=(const TreeData&) = default;
2263  __hostdev__ void setRoot(const void* root) {
2264  NANOVDB_ASSERT(root);
2265  mNodeOffset[3] = util::PtrDiff(root, this);
2266  }
2267 
2268  /// @brief Get a non-const void pointer to the root node (never NULL)
2269  __hostdev__ void* getRoot() { return util::PtrAdd(this, mNodeOffset[3]); }
2270 
2271  /// @brief Get a const void pointer to the root node (never NULL)
2272  __hostdev__ const void* getRoot() const { return util::PtrAdd(this, mNodeOffset[3]); }
2273 
2274  template<typename NodeT>
2275  __hostdev__ void setFirstNode(const NodeT* node) {mNodeOffset[NodeT::LEVEL] = (node ? util::PtrDiff(node, this) : 0);}
2276 
2277  /// @brief Return true if the root is empty, i.e. has no child nodes or constant tiles
2278  __hostdev__ bool isEmpty() const {return mNodeOffset[3] ? *util::PtrAdd<uint32_t>(this, mNodeOffset[3] + sizeof(CoordBBox)) == 0 : true;}
2279 
2280  /// @brief Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2281  __hostdev__ CoordBBox bbox() const {return mNodeOffset[3] ? *util::PtrAdd<CoordBBox>(this, mNodeOffset[3]) : CoordBBox();}
2282 
2283  /// @brief return true if RootData is laid out immediately after TreeData in memory
2284  __hostdev__ bool isRootNext() const {return mNodeOffset[3] ? mNodeOffset[3] == sizeof(TreeData) : false; }
2285 };// TreeData
2286 
2287 // ----------------------------> GridTree <--------------------------------------
2288 
2289 /// @brief defines a tree type from a grid type while preserving constness
2290 template<typename GridT>
2291 struct GridTree
2292 {
2293  using Type = typename GridT::TreeType;
2294  using type = typename GridT::TreeType;
2295 };
2296 template<typename GridT>
2297 struct GridTree<const GridT>
2298 {
2299  using Type = const typename GridT::TreeType;
2300  using type = const typename GridT::TreeType;
2301 };
2302 
2303 // ----------------------------> Tree <--------------------------------------
2304 
2305 /// @brief VDB Tree, which is a thin wrapper around a RootNode.
2306 template<typename RootT>
2307 class Tree : public TreeData
2308 {
2309  static_assert(RootT::LEVEL == 3, "Tree depth is not supported");
2310  static_assert(RootT::ChildNodeType::LOG2DIM == 5, "Tree configuration is not supported");
2311  static_assert(RootT::ChildNodeType::ChildNodeType::LOG2DIM == 4, "Tree configuration is not supported");
2312  static_assert(RootT::LeafNodeType::LOG2DIM == 3, "Tree configuration is not supported");
2313 
2314 public:
2315  using DataType = TreeData;
2316  using RootType = RootT;
2317  using RootNodeType = RootT;
2318  using UpperNodeType = typename RootNodeType::ChildNodeType;
2319  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2320  using LeafNodeType = typename RootType::LeafNodeType;
2321  using ValueType = typename RootT::ValueType;
2322  using BuildType = typename RootT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2323  using CoordType = typename RootT::CoordType;
2324  using AccessorType = DefaultReadAccessor<BuildType>;
2325 
2326  using Node3 = RootT;
2327  using Node2 = typename RootT::ChildNodeType;
2328  using Node1 = typename Node2::ChildNodeType;
2329  using Node0 = LeafNodeType;
2330 
2331  /// @brief This class cannot be constructed or deleted
2332  Tree() = delete;
2333  Tree(const Tree&) = delete;
2334  Tree& operator=(const Tree&) = delete;
2335  ~Tree() = delete;
2336 
2337  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2338 
2339  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2340 
2341  /// @brief return memory usage in bytes for the class
2342  __hostdev__ static uint64_t memUsage() { return sizeof(DataType); }
2343 
2344  __hostdev__ RootT& root() {return *reinterpret_cast<RootT*>(DataType::getRoot());}
2345 
2346  __hostdev__ const RootT& root() const {return *reinterpret_cast<const RootT*>(DataType::getRoot());}
2347 
2348  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->root()); }
2349 
2350  /// @brief Return the value of the given voxel (regardless of state or location in the tree.)
2351  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->root().getValue(ijk); }
2352  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->root().getValue(CoordType(i, j, k)); }
2353 
2354  /// @brief Return the active state of the given voxel (regardless of state or location in the tree.)
2355  __hostdev__ bool isActive(const CoordType& ijk) const { return this->root().isActive(ijk); }
2356 
2357  /// @brief Return true if this tree is empty, i.e. contains no values or nodes
2358  //__hostdev__ bool isEmpty() const { return this->root().isEmpty(); }
2359 
2360  /// @brief Combines the previous two methods in a single call
2361  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->root().probeValue(ijk, v); }
2362 
2363  /// @brief Return a const reference to the background value.
2364  __hostdev__ const ValueType& background() const { return this->root().background(); }
2365 
2366  /// @brief Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree
2367  __hostdev__ void extrema(ValueType& min, ValueType& max) const;
2368 
2369  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2370  //__hostdev__ const BBox<CoordType>& bbox() const { return this->root().bbox(); }
2371 
2372  /// @brief Return the total number of active voxels in this tree.
2373  __hostdev__ uint64_t activeVoxelCount() const { return DataType::mVoxelCount; }
2374 
2375  /// @brief Return the total number of active tiles at the specified level of the tree.
2376  ///
2377  /// @details level = 1,2,3 corresponds to active tile count in lower internal nodes, upper
2378  /// internal nodes, and the root level. Note active values at the leaf level are
2379  /// referred to as active voxels (see activeVoxelCount defined above).
2380  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const
2381  {
2382  NANOVDB_ASSERT(level > 0 && level <= 3); // 1, 2, or 3
2383  return DataType::mTileCount[level - 1];
2384  }
2385 
2386  template<typename NodeT>
2387  __hostdev__ uint32_t nodeCount() const
2388  {
2389  static_assert(NodeT::LEVEL < 3, "Invalid NodeT");
2390  return DataType::mNodeCount[NodeT::LEVEL];
2391  }
2392 
2393  __hostdev__ uint32_t nodeCount(int level) const
2394  {
2395  NANOVDB_ASSERT(level < 3);
2396  return DataType::mNodeCount[level];
2397  }
2398 
2399  __hostdev__ uint32_t totalNodeCount() const
2400  {
2401  return DataType::mNodeCount[0] + DataType::mNodeCount[1] + DataType::mNodeCount[2];
2402  }
2403 
2404  /// @brief return a pointer to the first node of the specified type
2405  ///
2406  /// @warning Note it may return NULL if no nodes exist
2407  template<typename NodeT>
2408  __hostdev__ NodeT* getFirstNode()
2409  {
2410  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2411  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2412  }
2413 
2414  /// @brief return a const pointer to the first node of the specified type
2415  ///
2416  /// @warning Note it may return NULL if no nodes exist
2417  template<typename NodeT>
2418  __hostdev__ const NodeT* getFirstNode() const
2419  {
2420  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2421  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2422  }
2423 
2424  /// @brief return a pointer to the first node at the specified level
2425  ///
2426  /// @warning Note it may return NULL if no nodes exist
2427  template<int LEVEL>
2428  __hostdev__ typename NodeTrait<RootT, LEVEL>::type* getFirstNode()
2429  {
2430  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2431  }
2432 
2433  /// @brief return a const pointer to the first node of the specified level
2434  ///
2435  /// @warning Note it may return NULL if no nodes exist
2436  template<int LEVEL>
2437  __hostdev__ const typename NodeTrait<RootT, LEVEL>::type* getFirstNode() const
2438  {
2439  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2440  }
2441 
2442  /// @brief Template specializations of getFirstNode
2443  __hostdev__ LeafNodeType* getFirstLeaf() { return this->getFirstNode<LeafNodeType>(); }
2444  __hostdev__ const LeafNodeType* getFirstLeaf() const { return this->getFirstNode<LeafNodeType>(); }
2445  __hostdev__ typename NodeTrait<RootT, 1>::type* getFirstLower() { return this->getFirstNode<1>(); }
2446  __hostdev__ const typename NodeTrait<RootT, 1>::type* getFirstLower() const { return this->getFirstNode<1>(); }
2447  __hostdev__ typename NodeTrait<RootT, 2>::type* getFirstUpper() { return this->getFirstNode<2>(); }
2448  __hostdev__ const typename NodeTrait<RootT, 2>::type* getFirstUpper() const { return this->getFirstNode<2>(); }
2449 
2450  template<typename OpT, typename... ArgsT>
2451  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
2452  {
2453  return this->root().template get<OpT>(ijk, args...);
2454  }
2455 
2456  template<typename OpT, typename... ArgsT>
2457  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
2458  {
2459  return this->root().template set<OpT>(ijk, args...);
2460  }
2461 
2462 private:
2463  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(TreeData) is misaligned");
2464 
2465 }; // Tree class
2466 
2467 template<typename RootT>
2468 __hostdev__ void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
2469 {
2470  min = this->root().minimum();
2471  max = this->root().maximum();
2472 }
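/* Illustrative sketch (not part of the original header): querying a Tree directly. Note that
   Tree::getValue is primarily intended for debugging; prefer a ReadAccessor for
   performance-critical lookups. "inspectTree" is a hypothetical helper for a float tree,
   assuming the NanoTree alias defined later in this header.

   void inspectTree(const nanovdb::NanoTree<float>& tree)
   {
       const float background = tree.background();     // value of unset voxels
       const uint32_t leafCount = tree.nodeCount(0);    // number of leaf nodes
       float v;
       const bool active = tree.probeValue(nanovdb::Coord(0, 0, 0), v); // value and state at the origin
       (void)background; (void)leafCount; (void)active; (void)v;
   }
*/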
2473 
2474 // --------------------------> RootData <------------------------------------
2475 
2476 /// @brief Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode)
2477 ///
2478 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
2479 template<typename ChildT>
2480 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) RootData
2481 {
2482  using ValueT = typename ChildT::ValueType;
2483  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2484  using CoordT = typename ChildT::CoordType;
2485  using StatsT = typename ChildT::FloatType;
2486  static constexpr bool FIXED_SIZE = false;
2487 
2488  /// @brief Return a key based on the coordinates of a voxel
2489 #ifdef NANOVDB_USE_SINGLE_ROOT_KEY
2490  using KeyT = uint64_t;
2491  template<typename CoordType>
2492  __hostdev__ static KeyT CoordToKey(const CoordType& ijk)
2493  {
2494  static_assert(sizeof(CoordT) == sizeof(CoordType), "Mismatching sizeof");
2495  static_assert(32 - ChildT::TOTAL <= 21, "Cannot use 64 bit root keys");
2496  return (KeyT(uint32_t(ijk[2]) >> ChildT::TOTAL)) | // z is the lower 21 bits
2497  (KeyT(uint32_t(ijk[1]) >> ChildT::TOTAL) << 21) | // y is the middle 21 bits
2498  (KeyT(uint32_t(ijk[0]) >> ChildT::TOTAL) << 42); // x is the upper 21 bits
2499  }
2500  __hostdev__ static CoordT KeyToCoord(const KeyT& key)
2501  {
2502  static constexpr uint64_t MASK = (1u << 21) - 1; // used to mask out 21 lower bits
2503  return CoordT(((key >> 42) & MASK) << ChildT::TOTAL, // x are the upper 21 bits
2504  ((key >> 21) & MASK) << ChildT::TOTAL, // y are the middle 21 bits
2505  (key & MASK) << ChildT::TOTAL); // z are the lower 21 bits
2506  }
2507 #else
2508  using KeyT = CoordT;
2509  __hostdev__ static KeyT CoordToKey(const CoordT& ijk) { return ijk & ~ChildT::MASK; }
2510  __hostdev__ static CoordT KeyToCoord(const KeyT& key) { return key; }
2511 #endif
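  // Worked example (illustrative, not part of the original header): with the default
  // configuration ChildT::TOTAL = 12 (an upper node spans 4096^3 voxels), so the voxel
  // coordinate (4096, 0, 8192) maps to the key
  //   (uint32_t(8192) >> 12)       = 2   -> lower 21 bits (z)
  // | (uint32_t(   0) >> 12) << 21 = 0   -> middle 21 bits (y)
  // | (uint32_t(4096) >> 12) << 42 = 1   -> upper 21 bits (x)
  // i.e. key = (1ull << 42) | 2, and KeyToCoord(key) recovers (4096, 0, 8192).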
2512  math::BBox<CoordT> mBBox; // 24B. AABB of active values in index space.
2513  uint32_t mTableSize; // 4B. number of tiles and child pointers in the root node
2514 
2515  ValueT mBackground; // background value, i.e. value of any unset voxel
2516  ValueT mMinimum; // typically 4B, minimum of all the active values
2517  ValueT mMaximum; // typically 4B, maximum of all the active values
2518  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
2519  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
2520 
2521  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
2522  ///
2523  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
2524  __hostdev__ static constexpr uint32_t padding()
2525  {
2526  return sizeof(RootData) - (24 + 4 + 3 * sizeof(ValueT) + 2 * sizeof(StatsT));
2527  }
2528 
2529  struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) Tile
2530  {
2531  template<typename CoordType>
2532  __hostdev__ void setChild(const CoordType& k, const void* ptr, const RootData* data)
2533  {
2534  key = CoordToKey(k);
2535  state = false;
2536  child = util::PtrDiff(ptr, data);
2537  }
2538  template<typename CoordType, typename ValueType>
2539  __hostdev__ void setValue(const CoordType& k, bool s, const ValueType& v)
2540  {
2541  key = CoordToKey(k);
2542  state = s;
2543  value = v;
2544  child = 0;
2545  }
2546  __hostdev__ bool isChild() const { return child != 0; }
2547  __hostdev__ bool isValue() const { return child == 0; }
2548  __hostdev__ bool isActive() const { return child == 0 && state; }
2549  __hostdev__ CoordT origin() const { return KeyToCoord(key); }
2550  KeyT key; // NANOVDB_USE_SINGLE_ROOT_KEY ? 8B : 12B
2551  int64_t child; // 8B. signed byte offset from this node to the child node. 0 means it is a constant tile, so use value.
2552  uint32_t state; // 4B. state of tile value
2553  ValueT value; // value of tile (i.e. no child node)
2554  }; // Tile
2555 
2556  /// @brief Returns a pointer to the tile at the specified linear offset.
2557  ///
2558  /// @warning The linear offset is assumed to be in the valid range
2559  __hostdev__ const Tile* tile(uint32_t n) const
2560  {
2561  NANOVDB_ASSERT(n < mTableSize);
2562  return reinterpret_cast<const Tile*>(this + 1) + n;
2563  }
2564  __hostdev__ Tile* tile(uint32_t n)
2565  {
2566  NANOVDB_ASSERT(n < mTableSize);
2567  return reinterpret_cast<Tile*>(this + 1) + n;
2568  }
2569 
2570  __hostdev__ Tile* probeTile(const CoordT& ijk)
2571  {
2572  const auto key = CoordToKey(ijk);
2573 #if 1 // switch between linear and binary search
2574  for (Tile *p = reinterpret_cast<Tile*>(this + 1), *q = p + mTableSize; p < q; ++p)
2575  if (p->key == key)
2576  return p;
2577  return nullptr;
2578 #else // binary search - only enable this if the tiles are guaranteed to be sorted by key!
2579  int32_t low = 0, high = mTableSize; // low is inclusive and high is exclusive
2580  while (low != high) {
2581  int mid = low + ((high - low) >> 1);
2582  Tile* tile = this->tile(uint32_t(mid));
2583  if (tile->key == key) {
2584  return tile;
2585  } else if (tile->key < key) {
2586  low = mid + 1;
2587  } else {
2588  high = mid;
2589  }
2590  }
2591  return nullptr;
2592 #endif
2593  }
2594 
2595  __hostdev__ inline const Tile* probeTile(const CoordT& ijk) const
2596  {
2597  return const_cast<RootData*>(this)->probeTile(ijk);
2598  }
2599 
2600  /// @brief Returns a pointer to the child node of the specified tile.
2601  ///
2602  /// @warning A child node is assumed to exist in the specified tile
2603  __hostdev__ ChildT* getChild(const Tile* tile)
2604  {
2605  NANOVDB_ASSERT(tile->child);
2606  return util::PtrAdd<ChildT>(this, tile->child);
2607  }
2608  __hostdev__ const ChildT* getChild(const Tile* tile) const
2609  {
2610  NANOVDB_ASSERT(tile->child);
2611  return util::PtrAdd<ChildT>(this, tile->child);
2612  }
2613 
2614  __hostdev__ const ValueT& getMin() const { return mMinimum; }
2615  __hostdev__ const ValueT& getMax() const { return mMaximum; }
2616  __hostdev__ const StatsT& average() const { return mAverage; }
2617  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
2618 
2619  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
2620  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
2621  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
2622  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
2623 
2624  /// @brief This class cannot be constructed or deleted
2625  RootData() = delete;
2626  RootData(const RootData&) = delete;
2627  RootData& operator=(const RootData&) = delete;
2628  ~RootData() = delete;
2629 }; // RootData
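// [Editorial example, not part of the original header] A minimal sketch of how the members of
// RootData cooperate during a lookup, writing RootDataT for a concrete instantiation (e.g.
// nanovdb::NanoRoot<float>::DataType) and assuming a valid, 32B-aligned grid buffer:
//
//   // RootDataT* root = ...;                    // hypothetical
//   // if (auto* tile = root->probeTile(ijk)) {  // linear search on CoordToKey(ijk)
//   //     if (tile->isChild())
//   //         visit(root->getChild(tile));      // child node at byte offset tile->child
//   //     else
//   //         visit(tile->value, tile->state);  // constant tile
//   // } else {
//   //     visit(root->mBackground);             // coordinate was never set
//   // }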
2630 
2631 // --------------------------> RootNode <------------------------------------
2632 
2633 /// @brief Top-most node of the VDB tree structure.
2634 template<typename ChildT>
2635 class RootNode : public RootData<ChildT>
2636 {
2637 public:
2638  using DataType = RootData<ChildT>;
2639  using ChildNodeType = ChildT;
2640  using RootType = RootNode<ChildT>; // this allows RootNode to behave like a Tree
2641  using RootNodeType = RootType;
2642  using UpperNodeType = ChildT;
2643  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2644  using LeafNodeType = typename ChildT::LeafNodeType;
2645  using ValueType = typename DataType::ValueT;
2646  using FloatType = typename DataType::StatsT;
2647  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2648 
2649  using CoordType = typename ChildT::CoordType;
2650  using BBoxType = math::BBox<CoordType>;
2651  using AccessorType = DefaultReadAccessor<BuildType>;
2652  using Tile = typename DataType::Tile;
2653  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
2654 
2655  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
2656 
2657  template<typename RootT>
2658  class BaseIter
2659  {
2660  protected:
2661  using DataT = typename util::match_const<DataType, RootT>::type;
2662  using TileT = typename util::match_const<Tile, RootT>::type;
2663  DataT* mData;
2664  uint32_t mPos, mSize;
2665  __hostdev__ BaseIter(DataT* data = nullptr, uint32_t n = 0)
2666  : mData(data)
2667  , mPos(0)
2668  , mSize(n)
2669  {
2670  }
2671 
2672  public:
2673  __hostdev__ operator bool() const { return mPos < mSize; }
2674  __hostdev__ uint32_t pos() const { return mPos; }
2675  __hostdev__ void next() { ++mPos; }
2676  __hostdev__ TileT* tile() const { return mData->tile(mPos); }
2677  __hostdev__ CoordType getOrigin() const
2678  {
2679  NANOVDB_ASSERT(*this);
2680  return this->tile()->origin();
2681  }
2682  __hostdev__ CoordType getCoord() const
2683  {
2684  NANOVDB_ASSERT(*this);
2685  return this->tile()->origin();
2686  }
2687  }; // Member class BaseIter
2688 
2689  template<typename RootT>
2690  class ChildIter : public BaseIter<RootT>
2691  {
2692  static_assert(util::is_same<typename util::remove_const<RootT>::type, RootNode>::value, "Invalid RootT");
2693  using BaseT = BaseIter<RootT>;
2694  using NodeT = typename util::match_const<ChildT, RootT>::type;
2695 
2696  public:
2697  __hostdev__ ChildIter()
2698  : BaseT()
2699  {
2700  }
2701  __hostdev__ ChildIter(RootT* parent)
2702  : BaseT(parent->data(), parent->tileCount())
2703  {
2704  NANOVDB_ASSERT(BaseT::mData);
2705  while (*this && !this->tile()->isChild())
2706  this->next();
2707  }
2708  __hostdev__ NodeT& operator*() const
2709  {
2710  NANOVDB_ASSERT(*this);
2711  return *BaseT::mData->getChild(this->tile());
2712  }
2713  __hostdev__ NodeT* operator->() const
2714  {
2715  NANOVDB_ASSERT(*this);
2716  return BaseT::mData->getChild(this->tile());
2717  }
2718  __hostdev__ ChildIter& operator++()
2719  {
2720  NANOVDB_ASSERT(BaseT::mData);
2721  this->next();
2722  while (*this && this->tile()->isValue())
2723  this->next();
2724  return *this;
2725  }
2726  __hostdev__ ChildIter operator++(int)
2727  {
2728  auto tmp = *this;
2729  ++(*this);
2730  return tmp;
2731  }
2732  }; // Member class ChildIter
2733 
2734  using ChildIterator = ChildIter<RootNode>;
2735  using ConstChildIterator = ChildIter<const RootNode>;
2736 
2737  __hostdev__ ChildIterator beginChild() { return ChildIterator(this); }
2738  __hostdev__ ConstChildIterator cbeginChild() const { return ConstChildIterator(this); }
2739 
2740  template<typename RootT>
2741  class ValueIter : public BaseIter<RootT>
2742  {
2743  using BaseT = BaseIter<RootT>;
2744 
2745  public:
2746  __hostdev__ ValueIter()
2747  : BaseT()
2748  {
2749  }
2750  __hostdev__ ValueIter(RootT* parent)
2751  : BaseT(parent->data(), parent->tileCount())
2752  {
2753  NANOVDB_ASSERT(BaseT::mData);
2754  while (*this && this->tile()->isChild())
2755  this->next();
2756  }
2757  __hostdev__ ValueType operator*() const
2758  {
2759  NANOVDB_ASSERT(*this);
2760  return this->tile()->value;
2761  }
2762  __hostdev__ bool isActive() const
2763  {
2764  NANOVDB_ASSERT(*this);
2765  return this->tile()->state;
2766  }
2767  __hostdev__ ValueIter& operator++()
2768  {
2769  NANOVDB_ASSERT(BaseT::mData);
2770  this->next();
2771  while (*this && this->tile()->isChild())
2772  this->next();
2773  return *this;
2774  }
2775  __hostdev__ ValueIter operator++(int)
2776  {
2777  auto tmp = *this;
2778  ++(*this);
2779  return tmp;
2780  }
2781  }; // Member class ValueIter
2782 
2783  using ValueIterator = ValueIter<RootNode>;
2784  using ConstValueIterator = ValueIter<const RootNode>;
2785 
2786  __hostdev__ ValueIterator beginValue() { return ValueIterator(this); }
2787  __hostdev__ ConstValueIterator cbeginValueAll() const { return ConstValueIterator(this); }
2788 
2789  template<typename RootT>
2790  class ValueOnIter : public BaseIter<RootT>
2791  {
2792  using BaseT = BaseIter<RootT>;
2793 
2794  public:
2795  __hostdev__ ValueOnIter()
2796  : BaseT()
2797  {
2798  }
2799  __hostdev__ ValueOnIter(RootT* parent)
2800  : BaseT(parent->data(), parent->tileCount())
2801  {
2802  NANOVDB_ASSERT(BaseT::mData);
2803  while (*this && !this->tile()->isActive())
2804  ++BaseT::mPos;
2805  }
2806  __hostdev__ ValueType operator*() const
2807  {
2808  NANOVDB_ASSERT(*this);
2809  return this->tile()->value;
2810  }
2811  __hostdev__ ValueOnIter& operator++()
2812  {
2813  NANOVDB_ASSERT(BaseT::mData);
2814  this->next();
2815  while (*this && !this->tile()->isActive())
2816  this->next();
2817  return *this;
2818  }
2819  __hostdev__ ValueOnIter operator++(int)
2820  {
2821  auto tmp = *this;
2822  ++(*this);
2823  return tmp;
2824  }
2825  }; // Member class ValueOnIter
2826 
2827  using ValueOnIterator = ValueOnIter<RootNode>;
2828  using ConstValueOnIterator = ValueOnIter<const RootNode>;
2829 
2830  __hostdev__ ValueOnIterator beginValueOn() { return ValueOnIterator(this); }
2831  __hostdev__ ConstValueOnIterator cbeginValueOn() const { return ConstValueOnIterator(this); }
2832 
2833  template<typename RootT>
2834  class DenseIter : public BaseIter<RootT>
2835  {
2836  using BaseT = BaseIter<RootT>;
2837  using NodeT = typename util::match_const<ChildT, RootT>::type;
2838 
2839  public:
2840  __hostdev__ DenseIter()
2841  : BaseT()
2842  {
2843  }
2844  __hostdev__ DenseIter(RootT* parent)
2845  : BaseT(parent->data(), parent->tileCount())
2846  {
2847  NANOVDB_ASSERT(BaseT::mData);
2848  }
2849  __hostdev__ NodeT* probeChild(ValueType& value) const
2850  {
2851  NANOVDB_ASSERT(*this);
2852  NodeT* child = nullptr;
2853  auto* t = this->tile();
2854  if (t->isChild()) {
2855  child = BaseT::mData->getChild(t);
2856  } else {
2857  value = t->value;
2858  }
2859  return child;
2860  }
2861  __hostdev__ bool isValueOn() const
2862  {
2863  NANOVDB_ASSERT(*this);
2864  return this->tile()->state;
2865  }
2866  __hostdev__ DenseIter& operator++()
2867  {
2868  NANOVDB_ASSERT(BaseT::mData);
2869  this->next();
2870  return *this;
2871  }
2872  __hostdev__ DenseIter operator++(int)
2873  {
2874  auto tmp = *this;
2875  ++(*this);
2876  return tmp;
2877  }
2878  }; // Member class DenseIter
2879 
2880  using DenseIterator = DenseIter<RootNode>;
2881  using ConstDenseIterator = DenseIter<const RootNode>;
2882 
2883  __hostdev__ DenseIterator beginDense() { return DenseIterator(this); }
2884  __hostdev__ ConstDenseIterator cbeginDense() const { return ConstDenseIterator(this); }
2885  __hostdev__ ConstDenseIterator cbeginChildAll() const { return ConstDenseIterator(this); }
2886 
2887  /// @brief This class cannot be constructed or deleted
2888  RootNode() = delete;
2889  RootNode(const RootNode&) = delete;
2890  RootNode& operator=(const RootNode&) = delete;
2891  ~RootNode() = delete;
2892 
2893  __hostdev__ AccessorType getAccessor() const { return AccessorType(*this); }
2894 
2895  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2896 
2897  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2898 
2899  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2900  __hostdev__ const BBoxType& bbox() const { return DataType::mBBox; }
2901 
2902  /// @brief Return the total number of active voxels in the root and all its child nodes.
2903 
2904  /// @brief Return a const reference to the background value, i.e. the value associated with
2905  /// any coordinate location that has not been set explicitly.
2906  __hostdev__ const ValueType& background() const { return DataType::mBackground; }
2907 
2908  /// @brief Return the number of tiles encoded in this root node
2909  __hostdev__ const uint32_t& tileCount() const { return DataType::mTableSize; }
2910  __hostdev__ const uint32_t& getTableSize() const { return DataType::mTableSize; }
2911 
2912  /// @brief Return a const reference to the minimum active value encoded in this root node and any of its child nodes
2913  __hostdev__ const ValueType& minimum() const { return DataType::mMinimum; }
2914 
2915  /// @brief Return a const reference to the maximum active value encoded in this root node and any of its child nodes
2916  __hostdev__ const ValueType& maximum() const { return DataType::mMaximum; }
2917 
2918  /// @brief Return a const reference to the average of all the active values encoded in this root node and any of its child nodes
2919  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
2920 
2921  /// @brief Return the variance of all the active values encoded in this root node and any of its child nodes
2922  __hostdev__ FloatType variance() const { return math::Pow2(DataType::mStdDevi); }
2923 
2924  /// @brief Return a const reference to the standard deviation of all the active values encoded in this root node and any of its child nodes
2925  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
2926 
2927  /// @brief Return the expected memory footprint in bytes with the specified number of tiles
2928  __hostdev__ static uint64_t memUsage(uint32_t tableSize) { return sizeof(RootNode) + tableSize * sizeof(Tile); }
2929 
2930  /// @brief Return the actual memory footprint of this root node
2931  __hostdev__ uint64_t memUsage() const { return sizeof(RootNode) + DataType::mTableSize * sizeof(Tile); }
2932 
2933  /// @brief Return true if this RootNode is empty, i.e. contains no values or nodes
2934  __hostdev__ bool isEmpty() const { return DataType::mTableSize == uint32_t(0); }
2935 
2936 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
2937  /// @brief Return the value of the given voxel
2938  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
2939  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildType>>(CoordType(i, j, k)); }
2940  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
2941  /// @brief Return the state of the specified voxel and update its value
2942  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
2943  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
2944 #else // NANOVDB_NEW_ACCESSOR_METHODS
2945 
2946  /// @brief Return the value of the given voxel
2947  __hostdev__ ValueType getValue(const CoordType& ijk) const
2948  {
2949  if (const Tile* tile = DataType::probeTile(ijk)) {
2950  return tile->isChild() ? this->getChild(tile)->getValue(ijk) : tile->value;
2951  }
2952  return DataType::mBackground;
2953  }
2954  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->getValue(CoordType(i, j, k)); }
2955 
2956  __hostdev__ bool isActive(const CoordType& ijk) const
2957  {
2958  if (const Tile* tile = DataType::probeTile(ijk)) {
2959  return tile->isChild() ? this->getChild(tile)->isActive(ijk) : tile->state;
2960  }
2961  return false;
2962  }
2963 
2964  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
2965  {
2966  if (const Tile* tile = DataType::probeTile(ijk)) {
2967  if (tile->isChild()) {
2968  const auto* child = this->getChild(tile);
2969  return child->probeValue(ijk, v);
2970  }
2971  v = tile->value;
2972  return tile->state;
2973  }
2974  v = DataType::mBackground;
2975  return false;
2976  }
2977 
2978  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const
2979  {
2980  const Tile* tile = DataType::probeTile(ijk);
2981  if (tile && tile->isChild()) {
2982  const auto* child = this->getChild(tile);
2983  return child->probeLeaf(ijk);
2984  }
2985  return nullptr;
2986  }
2987 
2988 #endif // NANOVDB_NEW_ACCESSOR_METHODS
2989 
2990  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
2991  {
2992  const Tile* tile = DataType::probeTile(ijk);
2993  return tile && tile->isChild() ? this->getChild(tile) : nullptr;
2994  }
2995 
2996  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
2997  {
2998  const Tile* tile = DataType::probeTile(ijk);
2999  return tile && tile->isChild() ? this->getChild(tile) : nullptr;
3000  }
3001 
3002  template<typename OpT, typename... ArgsT>
3003  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
3004  {
3005  if (const Tile* tile = this->probeTile(ijk)) {
3006  if (tile->isChild())
3007  return this->getChild(tile)->template get<OpT>(ijk, args...);
3008  return OpT::get(*tile, args...);
3009  }
3010  return OpT::get(*this, args...);
3011  }
3012 
3013  template<typename OpT, typename... ArgsT>
3014  // __hostdev__ auto // occasionally fails with NVCC
3015  __hostdev__ decltype(OpT::set(util::declval<Tile&>(), util::declval<ArgsT>()...))
3016  set(const CoordType& ijk, ArgsT&&... args)
3017  {
3018  if (Tile* tile = DataType::probeTile(ijk)) {
3019  if (tile->isChild())
3020  return this->getChild(tile)->template set<OpT>(ijk, args...);
3021  return OpT::set(*tile, args...);
3022  }
3023  return OpT::set(*this, args...);
3024  }
3025 
3026 private:
3027  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData) is misaligned");
3028  static_assert(sizeof(typename DataType::Tile) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData::Tile) is misaligned");
3029 
3030  template<typename, int, int, int>
3031  friend class ReadAccessor;
3032 
3033  template<typename>
3034  friend class Tree;
3035 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
3036  /// @brief Private method to return node information and update a ReadAccessor
3037  template<typename AccT>
3038  __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& ijk, const AccT& acc) const
3039  {
3040  using NodeInfoT = typename AccT::NodeInfo;
3041  if (const Tile* tile = this->probeTile(ijk)) {
3042  if (tile->isChild()) {
3043  const auto* child = this->getChild(tile);
3044  acc.insert(ijk, child);
3045  return child->getNodeInfoAndCache(ijk, acc);
3046  }
3047  return NodeInfoT{LEVEL, ChildT::dim(), tile->value, tile->value, tile->value, 0, tile->origin(), tile->origin() + CoordType(ChildT::DIM)};
3048  }
3049  return NodeInfoT{LEVEL, ChildT::dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
3050  }
3051 
3052  /// @brief Private method to return a voxel value and update a ReadAccessor
3053  template<typename AccT>
3054  __hostdev__ ValueType getValueAndCache(const CoordType& ijk, const AccT& acc) const
3055  {
3056  if (const Tile* tile = this->probeTile(ijk)) {
3057  if (tile->isChild()) {
3058  const auto* child = this->getChild(tile);
3059  acc.insert(ijk, child);
3060  return child->getValueAndCache(ijk, acc);
3061  }
3062  return tile->value;
3063  }
3064  return DataType::mBackground;
3065  }
3066 
3067  template<typename AccT>
3068  __hostdev__ bool isActiveAndCache(const CoordType& ijk, const AccT& acc) const
3069  {
3070  const Tile* tile = this->probeTile(ijk);
3071  if (tile && tile->isChild()) {
3072  const auto* child = this->getChild(tile);
3073  acc.insert(ijk, child);
3074  return child->isActiveAndCache(ijk, acc);
3075  }
3076  return false;
3077  }
3078 
3079  template<typename AccT>
3080  __hostdev__ bool probeValueAndCache(const CoordType& ijk, ValueType& v, const AccT& acc) const
3081  {
3082  if (const Tile* tile = this->probeTile(ijk)) {
3083  if (tile->isChild()) {
3084  const auto* child = this->getChild(tile);
3085  acc.insert(ijk, child);
3086  return child->probeValueAndCache(ijk, v, acc);
3087  }
3088  v = tile->value;
3089  return tile->state;
3090  }
3091  v = DataType::mBackground;
3092  return false;
3093  }
3094 
3095  template<typename AccT>
3096  __hostdev__ const LeafNodeType* probeLeafAndCache(const CoordType& ijk, const AccT& acc) const
3097  {
3098  const Tile* tile = this->probeTile(ijk);
3099  if (tile && tile->isChild()) {
3100  const auto* child = this->getChild(tile);
3101  acc.insert(ijk, child);
3102  return child->probeLeafAndCache(ijk, acc);
3103  }
3104  return nullptr;
3105  }
3106 #endif // NANOVDB_NEW_ACCESSOR_METHODS
3107 
3108  template<typename RayT, typename AccT>
3109  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3110  {
3111  if (const Tile* tile = this->probeTile(ijk)) {
3112  if (tile->isChild()) {
3113  const auto* child = this->getChild(tile);
3114  acc.insert(ijk, child);
3115  return child->getDimAndCache(ijk, ray, acc);
3116  }
3117  return 1 << ChildT::TOTAL; //tile value
3118  }
3119  return ChildNodeType::dim(); // background
3120  }
3121 
3122  template<typename OpT, typename AccT, typename... ArgsT>
3123  //__hostdev__ decltype(OpT::get(util::declval<const Tile&>(), util::declval<ArgsT>()...))
3124  __hostdev__ auto
3125  getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3126  {
3127  if (const Tile* tile = this->probeTile(ijk)) {
3128  if (tile->isChild()) {
3129  const ChildT* child = this->getChild(tile);
3130  acc.insert(ijk, child);
3131  return child->template getAndCache<OpT>(ijk, acc, args...);
3132  }
3133  return OpT::get(*tile, args...);
3134  }
3135  return OpT::get(*this, args...);
3136  }
3137 
3138  template<typename OpT, typename AccT, typename... ArgsT>
3139  // __hostdev__ auto // occasionally fails with NVCC
3140  __hostdev__ decltype(OpT::set(util::declval<Tile&>(), util::declval<ArgsT>()...))
3141  setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3142  {
3143  if (Tile* tile = DataType::probeTile(ijk)) {
3144  if (tile->isChild()) {
3145  ChildT* child = this->getChild(tile);
3146  acc.insert(ijk, child);
3147  return child->template setAndCache<OpT>(ijk, acc, args...);
3148  }
3149  return OpT::set(*tile, args...);
3150  }
3151  return OpT::set(*this, args...);
3152  }
3153 
3154 }; // RootNode class
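// [Editorial example, not part of the original header] A hedged sketch of how client code
// could walk the root-level table with the iterators defined above; a RootNode is normally
// reached through a grid, e.g. grid->tree().root():
//
//   // const nanovdb::NanoGrid<float>* grid = ...;     // assumed to exist
//   // const auto& root = grid->tree().root();
//   // for (auto it = root.cbeginChild(); it; ++it)    // visits child nodes only
//   //     use(it->bbox());
//   // for (auto it = root.cbeginValueOn(); it; ++it)  // visits active (constant) tiles only
//   //     use(it.getOrigin(), *it);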
3155 
3156 // After the RootNode the memory layout is assumed to be the sorted Tiles
3157 
3158 // --------------------------> InternalNode <------------------------------------
3159 
3160 /// @brief Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode)
3161 ///
3162 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3163 template<typename ChildT, uint32_t LOG2DIM>
3164 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) InternalData
3165 {
3166  using ValueT = typename ChildT::ValueType;
3167  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3168  using StatsT = typename ChildT::FloatType;
3169  using CoordT = typename ChildT::CoordType;
3170  using MaskT = typename ChildT::template MaskType<LOG2DIM>;
3171  static constexpr bool FIXED_SIZE = true;
3172 
3173  union Tile
3174  {
3175  ValueT value;
3176  int64_t child; //signed 64 bit byte offset relative to this InternalData, i.e. child-pointer = Tile::child + this
3177  /// @brief This class cannot be constructed or deleted
3178  Tile() = delete;
3179  Tile(const Tile&) = delete;
3180  Tile& operator=(const Tile&) = delete;
3181  ~Tile() = delete;
3182  };
3183 
3184  math::BBox<CoordT> mBBox; // 24B. node bounding box. |
3185  uint64_t mFlags; // 8B. node flags. | 32B aligned
3186  MaskT mValueMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3187  MaskT mChildMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3188 
3189  ValueT mMinimum; // typically 4B
3190  ValueT mMaximum; // typically 4B
3191  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
3192  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3193  // possible padding, e.g. 28 byte padding when ValueType = bool
3194 
3195  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3196  ///
3197  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3198  __hostdev__ static constexpr uint32_t padding()
3199  {
3200  return sizeof(InternalData) - (24u + 8u + 2 * (sizeof(MaskT) + sizeof(ValueT) + sizeof(StatsT)) + (1u << (3 * LOG2DIM)) * (sizeof(ValueT) > 8u ? sizeof(ValueT) : 8u));
3201  }
3202  alignas(32) Tile mTable[1u << (3 * LOG2DIM)]; // sizeof(ValueT) x (16*16*16 or 32*32*32)
3203 
3204  __hostdev__ static uint64_t memUsage() { return sizeof(InternalData); }
3205 
3206  __hostdev__ void setChild(uint32_t n, const void* ptr)
3207  {
3208  NANOVDB_ASSERT(mChildMask.isOn(n));
3209  mTable[n].child = util::PtrDiff(ptr, this);
3210  }
3211 
3212  template<typename ValueT>
3213  __hostdev__ void setValue(uint32_t n, const ValueT& v)
3214  {
3215  NANOVDB_ASSERT(!mChildMask.isOn(n));
3216  mTable[n].value = v;
3217  }
3218 
3219  /// @brief Returns a pointer to the child node at the specified linear offset.
3220  __hostdev__ ChildT* getChild(uint32_t n)
3221  {
3222  NANOVDB_ASSERT(mChildMask.isOn(n));
3223  return util::PtrAdd<ChildT>(this, mTable[n].child);
3224  }
3225  __hostdev__ const ChildT* getChild(uint32_t n) const
3226  {
3227  NANOVDB_ASSERT(mChildMask.isOn(n));
3228  return util::PtrAdd<ChildT>(this, mTable[n].child);
3229  }
3230 
3231  __hostdev__ ValueT getValue(uint32_t n) const
3232  {
3233  NANOVDB_ASSERT(mChildMask.isOff(n));
3234  return mTable[n].value;
3235  }
3236 
3237  __hostdev__ bool isActive(uint32_t n) const
3238  {
3239  NANOVDB_ASSERT(mChildMask.isOff(n));
3240  return mValueMask.isOn(n);
3241  }
3242 
3243  __hostdev__ bool isChild(uint32_t n) const { return mChildMask.isOn(n); }
3244 
3245  template<typename T>
3246  __hostdev__ void setOrigin(const T& ijk) { mBBox[0] = ijk; }
3247 
3248  __hostdev__ const ValueT& getMin() const { return mMinimum; }
3249  __hostdev__ const ValueT& getMax() const { return mMaximum; }
3250  __hostdev__ const StatsT& average() const { return mAverage; }
3251  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
3252 
3253 #if defined(__GNUC__) && !defined(__APPLE__) && !defined(__llvm__)
3254 #pragma GCC diagnostic push
3255 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3256 #endif
3257  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
3258  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
3259  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
3260  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
3261 #if defined(__GNUC__) && !defined(__APPLE__) && !defined(__llvm__)
3262 #pragma GCC diagnostic pop
3263 #endif
3264 
3265  /// @brief This class cannot be constructed or deleted
3266  InternalData() = delete;
3267  InternalData(const InternalData&) = delete;
3268  InternalData& operator=(const InternalData&) = delete;
3269  ~InternalData() = delete;
3270 }; // InternalData
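// [Editorial note, not part of the original header] Tile::child above is a signed *byte*
// offset relative to the InternalData itself, so resolving table entry n is plain pointer
// arithmetic (this is exactly what getChild(n)/getValue(n) do):
//
//   // if (data->mChildMask.isOn(n))
//   //     child = util::PtrAdd<ChildT>(data, data->mTable[n].child);
//   // else
//   //     value = data->mTable[n].value;  // tile value, active iff data->mValueMask.isOn(n)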
3271 
3272 /// @brief Internal nodes of a VDB tree
3273 template<typename ChildT, uint32_t Log2Dim = ChildT::LOG2DIM + 1>
3274 class InternalNode : public InternalData<ChildT, Log2Dim>
3275 {
3276 public:
3277  using DataType = InternalData<ChildT, Log2Dim>;
3278  using ValueType = typename DataType::ValueT;
3279  using FloatType = typename DataType::StatsT;
3280  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3281  using LeafNodeType = typename ChildT::LeafNodeType;
3282  using ChildNodeType = ChildT;
3283  using CoordType = typename ChildT::CoordType;
3284  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
3285  template<uint32_t LOG2>
3286  using MaskType = typename ChildT::template MaskType<LOG2>;
3287  template<bool On>
3288  using MaskIterT = typename Mask<Log2Dim>::template Iterator<On>;
3289 
3290  static constexpr uint32_t LOG2DIM = Log2Dim;
3291  static constexpr uint32_t TOTAL = LOG2DIM + ChildT::TOTAL; // dimension in index space
3292  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
3293  static constexpr uint32_t SIZE = 1u << (3 * LOG2DIM); // number of tile values (or child pointers)
3294  static constexpr uint32_t MASK = (1u << TOTAL) - 1u;
3295  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
3296  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
3297 
3298  /// @brief Visits child nodes of this node only
3299  template <typename ParentT>
3300  class ChildIter : public MaskIterT<true>
3301  {
3302  static_assert(util::is_same<typename util::remove_const<ParentT>::type, InternalNode>::value, "Invalid ParentT");
3303  using BaseT = MaskIterT<true>;
3304  using NodeT = typename util::match_const<ChildT, ParentT>::type;
3305  ParentT* mParent;
3306 
3307  public:
3308  __hostdev__ ChildIter()
3309  : BaseT()
3310  , mParent(nullptr)
3311  {
3312  }
3313  __hostdev__ ChildIter(ParentT* parent)
3314  : BaseT(parent->mChildMask.beginOn())
3315  , mParent(parent)
3316  {
3317  }
3318  ChildIter& operator=(const ChildIter&) = default;
3319  __hostdev__ NodeT& operator*() const
3320  {
3321  NANOVDB_ASSERT(*this);
3322  return *mParent->getChild(BaseT::pos());
3323  }
3324  __hostdev__ NodeT* operator->() const
3325  {
3326  NANOVDB_ASSERT(*this);
3327  return mParent->getChild(BaseT::pos());
3328  }
3329  __hostdev__ CoordType getOrigin() const
3330  {
3331  NANOVDB_ASSERT(*this);
3332  return (*this)->origin();
3333  }
3334  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3335  }; // Member class ChildIter
3336 
3337  using ChildIterator = ChildIter<InternalNode>;
3338  using ConstChildIterator = ChildIter<const InternalNode>;
3339 
3340  __hostdev__ ChildIterator beginChild() { return ChildIterator(this); }
3341  __hostdev__ ConstChildIterator cbeginChild() const { return ConstChildIterator(this); }
3342 
3343  /// @brief Visits all tile values in this node, i.e. both inactive and active tiles
3344  class ValueIterator : public MaskIterT<false>
3345  {
3346  using BaseT = MaskIterT<false>;
3347  const InternalNode* mParent;
3348 
3349  public:
3350  __hostdev__ ValueIterator()
3351  : BaseT()
3352  , mParent(nullptr)
3353  {
3354  }
3355  __hostdev__ ValueIterator(const InternalNode* parent)
3356  : BaseT(parent->data()->mChildMask.beginOff())
3357  , mParent(parent)
3358  {
3359  }
3360  ValueIterator& operator=(const ValueIterator&) = default;
3361  __hostdev__ ValueType operator*() const
3362  {
3363  NANOVDB_ASSERT(*this);
3364  return mParent->data()->getValue(BaseT::pos());
3365  }
3366  __hostdev__ CoordType getOrigin() const
3367  {
3368  NANOVDB_ASSERT(*this);
3369  return mParent->offsetToGlobalCoord(BaseT::pos());
3370  }
3371  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3372  __hostdev__ bool isActive() const
3373  {
3374  NANOVDB_ASSERT(*this);
3375  return mParent->data()->isActive(BaseT::mPos);
3376  }
3377  }; // Member class ValueIterator
3378 
3379  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
3380  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
3381 
3382  /// @brief Visits active tile values of this node only
3383  class ValueOnIterator : public MaskIterT<true>
3384  {
3385  using BaseT = MaskIterT<true>;
3386  const InternalNode* mParent;
3387 
3388  public:
3389  __hostdev__ ValueOnIterator()
3390  : BaseT()
3391  , mParent(nullptr)
3392  {
3393  }
3394  __hostdev__ ValueOnIterator(const InternalNode* parent)
3395  : BaseT(parent->data()->mValueMask.beginOn())
3396  , mParent(parent)
3397  {
3398  }
3399  ValueOnIterator& operator=(const ValueOnIterator&) = default;
3400  __hostdev__ ValueType operator*() const
3401  {
3402  NANOVDB_ASSERT(*this);
3403  return mParent->data()->getValue(BaseT::pos());
3404  }
3405  __hostdev__ CoordType getOrigin() const
3406  {
3407  NANOVDB_ASSERT(*this);
3408  return mParent->offsetToGlobalCoord(BaseT::pos());
3409  }
3410  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3411  }; // Member class ValueOnIterator
3412 
3413  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
3414  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
3415 
3416  /// @brief Visits all tile values and child nodes of this node
3417  class DenseIterator : public Mask<Log2Dim>::DenseIterator
3418  {
3419  using BaseT = typename Mask<Log2Dim>::DenseIterator;
3420  const DataType* mParent;
3421 
3422  public:
3423  __hostdev__ DenseIterator()
3424  : BaseT()
3425  , mParent(nullptr)
3426  {
3427  }
3428  __hostdev__ DenseIterator(const InternalNode* parent)
3429  : BaseT(0)
3430  , mParent(parent->data())
3431  {
3432  }
3433  DenseIterator& operator=(const DenseIterator&) = default;
3434  __hostdev__ const ChildT* probeChild(ValueType& value) const
3435  {
3436  NANOVDB_ASSERT(mParent && bool(*this));
3437  const ChildT* child = nullptr;
3438  if (mParent->mChildMask.isOn(BaseT::pos())) {
3439  child = mParent->getChild(BaseT::pos());
3440  } else {
3441  value = mParent->getValue(BaseT::pos());
3442  }
3443  return child;
3444  }
3445  __hostdev__ bool isValueOn() const
3446  {
3447  NANOVDB_ASSERT(mParent && bool(*this));
3448  return mParent->isActive(BaseT::pos());
3449  }
3450  __hostdev__ CoordType getOrigin() const
3451  {
3452  NANOVDB_ASSERT(mParent && bool(*this));
3453  return mParent->offsetToGlobalCoord(BaseT::pos());
3454  }
3455  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3456  }; // Member class DenseIterator
3457 
3458  __hostdev__ DenseIterator beginDense() const { return DenseIterator(this); }
3459  __hostdev__ DenseIterator cbeginChildAll() const { return DenseIterator(this); } // matches openvdb
3460 
3461  /// @brief This class cannot be constructed or deleted
3462  InternalNode() = delete;
3463  InternalNode(const InternalNode&) = delete;
3464  InternalNode& operator=(const InternalNode&) = delete;
3465  ~InternalNode() = delete;
3466 
3467  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3468 
3469  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3470 
3471  /// @brief Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32)
3472  __hostdev__ static uint32_t dim() { return 1u << TOTAL; }
3473 
3474  /// @brief Return memory usage in bytes for the class
3475  __hostdev__ static size_t memUsage() { return DataType::memUsage(); }
3476 
3477  /// @brief Return a const reference to the bit mask of active voxels in this internal node
3478  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
3479  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
3480 
3481  /// @brief Return a const reference to the bit mask of child nodes in this internal node
3482  __hostdev__ const MaskType<LOG2DIM>& childMask() const { return DataType::mChildMask; }
3483  __hostdev__ const MaskType<LOG2DIM>& getChildMask() const { return DataType::mChildMask; }
3484 
3485  /// @brief Return the origin in index space of this leaf node
3486  __hostdev__ CoordType origin() const { return DataType::mBBox.min() & ~MASK; }
3487 
3488  /// @brief Return a const reference to the minimum active value encoded in this internal node and any of its child nodes
3489  __hostdev__ const ValueType& minimum() const { return this->getMin(); }
3490 
3491  /// @brief Return a const reference to the maximum active value encoded in this internal node and any of its child nodes
3492  __hostdev__ const ValueType& maximum() const { return this->getMax(); }
3493 
3494  /// @brief Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes
3495  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3496 
3497  /// @brief Return the variance of all the active values encoded in this internal node and any of its child nodes
3498  __hostdev__ FloatType variance() const { return DataType::mStdDevi * DataType::mStdDevi; }
3499 
3500  /// @brief Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes
3501  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3502 
3503  /// @brief Return a const reference to the bounding box in index space of active values in this internal node and any of its child nodes
3504  __hostdev__ const math::BBox<CoordType>& bbox() const { return DataType::mBBox; }
3505 
3506  /// @brief If the first entry in this node's table is a tile, return the tile's value.
3507  /// Otherwise, return the result of calling getFirstValue() on the child.
3508  __hostdev__ ValueType getFirstValue() const
3509  {
3510  return DataType::mChildMask.isOn(0) ? this->getChild(0)->getFirstValue() : DataType::getValue(0);
3511  }
3512 
3513  /// @brief If the last entry in this node's table is a tile, return the tile's value.
3514  /// Otherwise, return the result of calling getLastValue() on the child.
3515  __hostdev__ ValueType getLastValue() const
3516  {
3517  return DataType::mChildMask.isOn(SIZE - 1) ? this->getChild(SIZE - 1)->getLastValue() : DataType::getValue(SIZE - 1);
3518  }
3519 
3520 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
3521  /// @brief Return the value of the given voxel
3522  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3523  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3524  /// @brief Return the state of the specified voxel and update its value
3525  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3526  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3527 #else // NANOVDB_NEW_ACCESSOR_METHODS
3528  __hostdev__ ValueType getValue(const CoordType& ijk) const
3529  {
3530  const uint32_t n = CoordToOffset(ijk);
3531  return DataType::mChildMask.isOn(n) ? this->getChild(n)->getValue(ijk) : DataType::getValue(n);
3532  }
3533  __hostdev__ bool isActive(const CoordType& ijk) const
3534  {
3535  const uint32_t n = CoordToOffset(ijk);
3536  return DataType::mChildMask.isOn(n) ? this->getChild(n)->isActive(ijk) : DataType::isActive(n);
3537  }
3538  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
3539  {
3540  const uint32_t n = CoordToOffset(ijk);
3541  if (DataType::mChildMask.isOn(n))
3542  return this->getChild(n)->probeValue(ijk, v);
3543  v = DataType::getValue(n);
3544  return DataType::isActive(n);
3545  }
3546  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const
3547  {
3548  const uint32_t n = CoordToOffset(ijk);
3549  if (DataType::mChildMask.isOn(n))
3550  return this->getChild(n)->probeLeaf(ijk);
3551  return nullptr;
3552  }
3553 
3554 #endif // NANOVDB_NEW_ACCESSOR_METHODS
3555 
3556  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
3557  {
3558  const uint32_t n = CoordToOffset(ijk);
3559  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3560  }
3561  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
3562  {
3563  const uint32_t n = CoordToOffset(ijk);
3564  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3565  }
3566 
3567  /// @brief Return the linear offset corresponding to the given coordinate
3568  __hostdev__ static uint32_t CoordToOffset(const CoordType& ijk)
3569  {
3570  return (((ijk[0] & MASK) >> ChildT::TOTAL) << (2 * LOG2DIM)) | // note, we're using bitwise OR instead of +
3571  (((ijk[1] & MASK) >> ChildT::TOTAL) << (LOG2DIM)) |
3572  ((ijk[2] & MASK) >> ChildT::TOTAL);
3573  }
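// [Editorial note, not part of the original header] Worked example of CoordToOffset for a
// lower internal node in the default tree configuration (LOG2DIM == 4, ChildT::TOTAL == 3,
// hence MASK == 127):
//
//   CoordToOffset(Coord(9, 17, 31))
//     == ((( 9 & 127) >> 3) << 8)  // x -> 1 << 8
//      | (((17 & 127) >> 3) << 4)  // y -> 2 << 4
//      | (( 31 & 127) >> 3)        // z -> 3
//     == 291
//
// i.e. the index into mTable of the 8^3 child node (or tile) that contains the voxel.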
3574 
3575  /// @return the local coordinate of the n'th tile or child node
3576  __hostdev__ static Coord OffsetToLocalCoord(uint32_t n)
3577  {
3578  NANOVDB_ASSERT(n < SIZE);
3579  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
3580  return Coord(n >> 2 * LOG2DIM, m >> LOG2DIM, m & ((1 << LOG2DIM) - 1));
3581  }
3582 
3583  /// @brief modifies local coordinates to global coordinates of a tile or child node
3584  __hostdev__ void localToGlobalCoord(Coord& ijk) const
3585  {
3586  ijk <<= ChildT::TOTAL;
3587  ijk += this->origin();
3588  }
3589 
3590  __hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
3591  {
3592  Coord ijk = InternalNode::OffsetToLocalCoord(n);
3593  this->localToGlobalCoord(ijk);
3594  return ijk;
3595  }
3596 
3597  /// @brief Return true if this node or any of its child nodes contain active values
3598  __hostdev__ bool isActive() const { return DataType::mFlags & uint32_t(2); }
3599 
3600  template<typename OpT, typename... ArgsT>
3601  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
3602  {
3603  const uint32_t n = CoordToOffset(ijk);
3604  if (this->isChild(n))
3605  return this->getChild(n)->template get<OpT>(ijk, args...);
3606  return OpT::get(*this, n, args...);
3607  }
3608 
3609  template<typename OpT, typename... ArgsT>
3610  //__hostdev__ auto // occasionally fails with NVCC
3611  __hostdev__ decltype(OpT::set(util::declval<InternalNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
3612  set(const CoordType& ijk, ArgsT&&... args)
3613  {
3614  const uint32_t n = CoordToOffset(ijk);
3615  if (this->isChild(n))
3616  return this->getChild(n)->template set<OpT>(ijk, args...);
3617  return OpT::set(*this, n, args...);
3618  }
3619 
3620 private:
3621  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(InternalData) is misaligned");
3622 
3623  template<typename, int, int, int>
3624  friend class ReadAccessor;
3625 
3626  template<typename>
3627  friend class RootNode;
3628  template<typename, uint32_t>
3629  friend class InternalNode;
3630 
3631 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
3632  /// @brief Private read access method used by the ReadAccessor
3633  template<typename AccT>
3634  __hostdev__ ValueType getValueAndCache(const CoordType& ijk, const AccT& acc) const
3635  {
3636  const uint32_t n = CoordToOffset(ijk);
3637  if (DataType::mChildMask.isOff(n))
3638  return DataType::getValue(n);
3639  const ChildT* child = this->getChild(n);
3640  acc.insert(ijk, child);
3641  return child->getValueAndCache(ijk, acc);
3642  }
3643  template<typename AccT>
3644  __hostdev__ bool isActiveAndCache(const CoordType& ijk, const AccT& acc) const
3645  {
3646  const uint32_t n = CoordToOffset(ijk);
3647  if (DataType::mChildMask.isOff(n))
3648  return DataType::isActive(n);
3649  const ChildT* child = this->getChild(n);
3650  acc.insert(ijk, child);
3651  return child->isActiveAndCache(ijk, acc);
3652  }
3653  template<typename AccT>
3654  __hostdev__ bool probeValueAndCache(const CoordType& ijk, ValueType& v, const AccT& acc) const
3655  {
3656  const uint32_t n = CoordToOffset(ijk);
3657  if (DataType::mChildMask.isOff(n)) {
3658  v = DataType::getValue(n);
3659  return DataType::isActive(n);
3660  }
3661  const ChildT* child = this->getChild(n);
3662  acc.insert(ijk, child);
3663  return child->probeValueAndCache(ijk, v, acc);
3664  }
3665  template<typename AccT>
3666  __hostdev__ const LeafNodeType* probeLeafAndCache(const CoordType& ijk, const AccT& acc) const
3667  {
3668  const uint32_t n = CoordToOffset(ijk);
3669  if (DataType::mChildMask.isOff(n))
3670  return nullptr;
3671  const ChildT* child = this->getChild(n);
3672  acc.insert(ijk, child);
3673  return child->probeLeafAndCache(ijk, acc);
3674  }
3675  template<typename AccT>
3676  __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& ijk, const AccT& acc) const
3677  {
3678  using NodeInfoT = typename AccT::NodeInfo;
3679  const uint32_t n = CoordToOffset(ijk);
3680  if (DataType::mChildMask.isOff(n)) {
3681  return NodeInfoT{LEVEL, this->dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
3682  }
3683  const ChildT* child = this->getChild(n);
3684  acc.insert(ijk, child);
3685  return child->getNodeInfoAndCache(ijk, acc);
3686  }
3687 #endif // NANOVDB_NEW_ACCESSOR_METHODS
3688 
3689  template<typename RayT, typename AccT>
3690  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3691  {
3692  if (DataType::mFlags & uint32_t(1u))
3693  return this->dim(); // skip this node if the 1st bit is set
3694  //if (!ray.intersects( this->bbox() )) return 1<<TOTAL;
3695 
3696  const uint32_t n = CoordToOffset(ijk);
3697  if (DataType::mChildMask.isOn(n)) {
3698  const ChildT* child = this->getChild(n);
3699  acc.insert(ijk, child);
3700  return child->getDimAndCache(ijk, ray, acc);
3701  }
3702  return ChildNodeType::dim(); // tile value
3703  }
3704 
3705  template<typename OpT, typename AccT, typename... ArgsT>
3706  __hostdev__ auto
3707  //__hostdev__ decltype(OpT::get(util::declval<const InternalNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
3708  getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3709  {
3710  const uint32_t n = CoordToOffset(ijk);
3711  if (DataType::mChildMask.isOff(n))
3712  return OpT::get(*this, n, args...);
3713  const ChildT* child = this->getChild(n);
3714  acc.insert(ijk, child);
3715  return child->template getAndCache<OpT>(ijk, acc, args...);
3716  }
3717 
3718  template<typename OpT, typename AccT, typename... ArgsT>
3719  //__hostdev__ auto // occasionally fails with NVCC
3720  __hostdev__ decltype(OpT::set(util::declval<InternalNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
3721  setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3722  {
3723  const uint32_t n = CoordToOffset(ijk);
3724  if (DataType::mChildMask.isOff(n))
3725  return OpT::set(*this, n, args...);
3726  ChildT* child = this->getChild(n);
3727  acc.insert(ijk, child);
3728  return child->template setAndCache<OpT>(ijk, acc, args...);
3729  }
3730 
3731 }; // InternalNode class
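// [Editorial example, not part of the original header] The get<OpT>(ijk, ...) methods of the
// node classes above implement a small compile-time visitor: OpT only needs static get()
// overloads for the node and tile types it can land on. A hedged sketch of a custom functor
// (the functor and its name are hypothetical; the node aliases are the usual NanoVDB ones):
//
//   // struct GetIsTile { // returns true if ijk is covered by a tile rather than a voxel
//   //     __hostdev__ static bool get(const NanoRoot<float>&) { return false; }           // background
//   //     __hostdev__ static bool get(const NanoRoot<float>::Tile&) { return true; }      // root tile
//   //     __hostdev__ static bool get(const NanoUpper<float>&, uint32_t) { return true; } // upper tile
//   //     __hostdev__ static bool get(const NanoLower<float>&, uint32_t) { return true; } // lower tile
//   //     __hostdev__ static bool get(const NanoLeaf<float>&, uint32_t) { return false; } // voxel
//   // };
//   // bool onTile = grid->tree().root().get<GetIsTile>(ijk);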
3732 
3733 // --------------------------> LeafData<T> <------------------------------------
3734 
3735 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3736 ///
3737 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3738 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3739 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData
3740 {
3741  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3742  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3743  using ValueType = ValueT;
3744  using BuildType = ValueT;
3745  using FloatType = typename FloatTraits<ValueT>::FloatType;
3746  using ArrayType = ValueT; // type used for the internal mValue array
3747  static constexpr bool FIXED_SIZE = true;
3748 
3749  CoordT mBBoxMin; // 12B.
3750  uint8_t mBBoxDif[3]; // 3B.
3751  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3752  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3753 
3754  ValueType mMinimum; // typically 4B
3755  ValueType mMaximum; // typically 4B
3756  FloatType mAverage; // typically 4B, average of all the active values in this node and its child nodes
3757  FloatType mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3758  alignas(32) ValueType mValues[1u << 3 * LOG2DIM];
3759 
3760  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3761  ///
3762  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3763  __hostdev__ static constexpr uint32_t padding()
3764  {
3765  return sizeof(LeafData) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * (sizeof(ValueT) + sizeof(FloatType)) + (1u << (3 * LOG2DIM)) * sizeof(ValueT));
3766  }
3767  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3768 
3769  __hostdev__ static bool hasStats() { return true; }
3770 
3771  __hostdev__ ValueType getValue(uint32_t i) const { return mValues[i]; }
3772  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& value) { mValues[offset] = value; }
3773  __hostdev__ void setValue(uint32_t offset, const ValueType& value)
3774  {
3775  mValueMask.setOn(offset);
3776  mValues[offset] = value;
3777  }
3778  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3779 
3780  __hostdev__ ValueType getMin() const { return mMinimum; }
3781  __hostdev__ ValueType getMax() const { return mMaximum; }
3782  __hostdev__ FloatType getAvg() const { return mAverage; }
3783  __hostdev__ FloatType getDev() const { return mStdDevi; }
3784 
3785  __hostdev__ void setMin(const ValueType& v) { mMinimum = v; }
3786  __hostdev__ void setMax(const ValueType& v) { mMaximum = v; }
3787  __hostdev__ void setAvg(const FloatType& v) { mAverage = v; }
3788  __hostdev__ void setDev(const FloatType& v) { mStdDevi = v; }
3789 
3790  template<typename T>
3791  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3792 
3793  __hostdev__ void fill(const ValueType& v)
3794  {
3795  for (auto *p = mValues, *q = p + (1u << 3 * LOG2DIM); p != q; ++p)
3796  *p = v;
3797  }
3798 
3799  /// @brief This class cannot be constructed or deleted
3800  LeafData() = delete;
3801  LeafData(const LeafData&) = delete;
3802  LeafData& operator=(const LeafData&) = delete;
3803  ~LeafData() = delete;
3804 }; // LeafData<ValueT>
3805 
3806 // --------------------------> LeafFnBase <------------------------------------
3807 
3808 /// @brief Base-class for quantized float leaf nodes
3809 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3810 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafFnBase
3811 {
3812  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3813  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3814  using ValueType = float;
3815  using FloatType = float;
3816 
3817  CoordT mBBoxMin; // 12B.
3818  uint8_t mBBoxDif[3]; // 3B.
3819  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3820  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3821 
3822  float mMinimum; // 4B - minimum of ALL values in this node
3823  float mQuantum; // = (max - min)/15 4B
3824  uint16_t mMin, mMax, mAvg, mDev; // quantized representations of statistics of active values
3825  // no padding since it's always 32B aligned
3826  __hostdev__ static uint64_t memUsage() { return sizeof(LeafFnBase); }
3827 
3828  __hostdev__ static bool hasStats() { return true; }
3829 
3830  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3831  ///
3832  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3833  __hostdev__ static constexpr uint32_t padding()
3834  {
3835  return sizeof(LeafFnBase) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * 4 + 4 * 2);
3836  }
3837  __hostdev__ void init(float min, float max, uint8_t bitWidth)
3838  {
3839  mMinimum = min;
3840  mQuantum = (max - min) / float((1 << bitWidth) - 1);
3841  }
3842 
3843  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3844 
3845  /// @brief return the quantized minimum of the active values in this node
3846  __hostdev__ float getMin() const { return mMin * mQuantum + mMinimum; }
3847 
3848  /// @brief return the quantized maximum of the active values in this node
3849  __hostdev__ float getMax() const { return mMax * mQuantum + mMinimum; }
3850 
3851  /// @brief return the quantized average of the active values in this node
3852  __hostdev__ float getAvg() const { return mAvg * mQuantum + mMinimum; }
3853  /// @brief return the quantized standard deviation of the active values in this node
3854 
3855  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3856  __hostdev__ float getDev() const { return mDev * mQuantum; }
3857 
3858  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3859  __hostdev__ void setMin(float min) { mMin = uint16_t((min - mMinimum) / mQuantum + 0.5f); }
3860 
3861  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3862  __hostdev__ void setMax(float max) { mMax = uint16_t((max - mMinimum) / mQuantum + 0.5f); }
3863 
3864  /// @note min <= avg <= max or 0 <= (avg-min)/(max-min) <= 1
3865  __hostdev__ void setAvg(float avg) { mAvg = uint16_t((avg - mMinimum) / mQuantum + 0.5f); }
3866 
3867  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3868  __hostdev__ void setDev(float dev) { mDev = uint16_t(dev / mQuantum + 0.5f); }
3869 
3870  template<typename T>
3871  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3872 }; // LeafFnBase
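// [Editorial note, not part of the original header] Worked example of the quantization above:
// after init(min = -1.0f, max = 1.0f, bitWidth = 8), mQuantum = 2.0f/255 ~= 0.00784. Then
// setAvg(0.5f) stores mAvg = uint16_t((0.5f + 1.0f)/mQuantum + 0.5f) = 191, and getAvg()
// decodes 191*mQuantum - 1.0f ~= 0.498, i.e. the round-trip error is bounded by mQuantum.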
3873 
3874 // --------------------------> LeafData<Fp4> <------------------------------------
3875 
3876 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3877 ///
3878 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3879 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3880 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp4, CoordT, MaskT, LOG2DIM>
3881  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3882 {
3883  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3884  using BuildType = Fp4;
3885  using ArrayType = uint8_t; // type used for the internal mValue array
3886  static constexpr bool FIXED_SIZE = true;
3887  alignas(32) uint8_t mCode[1u << (3 * LOG2DIM - 1)]; // LeafFnBase is 32B aligned and so is mCode
3888 
3889  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3890  __hostdev__ static constexpr uint32_t padding()
3891  {
3892  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3893  return sizeof(LeafData) - sizeof(BaseT) - (1u << (3 * LOG2DIM - 1));
3894  }
3895 
3896  __hostdev__ static constexpr uint8_t bitWidth() { return 4u; }
3897  __hostdev__ float getValue(uint32_t i) const
3898  {
3899 #if 0
3900  const uint8_t c = mCode[i>>1];
3901  return ( (i&1) ? c >> 4 : c & uint8_t(15) )*BaseT::mQuantum + BaseT::mMinimum;
3902 #else
3903  return ((mCode[i >> 1] >> ((i & 1) << 2)) & uint8_t(15)) * BaseT::mQuantum + BaseT::mMinimum;
3904 #endif
3905  }
3906 
3907  /// @brief This class cannot be constructed or deleted
3908  LeafData() = delete;
3909  LeafData(const LeafData&) = delete;
3910  LeafData& operator=(const LeafData&) = delete;
3911  ~LeafData() = delete;
3912 }; // LeafData<Fp4>
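// [Editorial note, not part of the original header] Worked example of the 4-bit packing above:
// two voxels share one byte of mCode, the even voxel in the low nibble and the odd voxel in
// the high nibble. With mCode[0] == 0xB4:
//
//   getValue(0) == ( 0xB4       & 15) * mQuantum + mMinimum ==  4 * mQuantum + mMinimum
//   getValue(1) == ((0xB4 >> 4) & 15) * mQuantum + mMinimum == 11 * mQuantum + mMinimum
//
// where mQuantum == (max - min)/15, as set up by LeafFnBase::init(min, max, 4).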
3913 
3914 // --------------------------> LeafData<Fp8> <------------------------------------
3915 
3916 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3917 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp8, CoordT, MaskT, LOG2DIM>
3918  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3919 {
3920  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3921  using BuildType = Fp8;
3922  using ArrayType = uint8_t; // type used for the internal mValue array
3923  static constexpr bool FIXED_SIZE = true;
3924  alignas(32) uint8_t mCode[1u << 3 * LOG2DIM];
3925  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3926  __hostdev__ static constexpr uint32_t padding()
3927  {
3928  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3929  return sizeof(LeafData) - sizeof(BaseT) - (1u << 3 * LOG2DIM);
3930  }
3931 
3932  __hostdev__ static constexpr uint8_t bitWidth() { return 8u; }
3933  __hostdev__ float getValue(uint32_t i) const
3934  {
3935  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/255 + min
3936  }
3937  /// @brief This class cannot be constructed or deleted
3938  LeafData() = delete;
3939  LeafData(const LeafData&) = delete;
3940  LeafData& operator=(const LeafData&) = delete;
3941  ~LeafData() = delete;
3942 }; // LeafData<Fp8>
3943 
3944 // --------------------------> LeafData<Fp16> <------------------------------------
3945 
3946 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3947 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp16, CoordT, MaskT, LOG2DIM>
3948  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3949 {
3950  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3951  using BuildType = Fp16;
3952  using ArrayType = uint16_t; // type used for the internal mValue array
3953  static constexpr bool FIXED_SIZE = true;
3954  alignas(32) uint16_t mCode[1u << 3 * LOG2DIM];
3955 
3956  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3957  __hostdev__ static constexpr uint32_t padding()
3958  {
3959  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3960  return sizeof(LeafData) - sizeof(BaseT) - 2 * (1u << 3 * LOG2DIM);
3961  }
3962 
3963  __hostdev__ static constexpr uint8_t bitWidth() { return 16u; }
3964  __hostdev__ float getValue(uint32_t i) const
3965  {
3966  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/65535 + min
3967  }
3968 
3969  /// @brief This class cannot be constructed or deleted
3970  LeafData() = delete;
3971  LeafData(const LeafData&) = delete;
3972  LeafData& operator=(const LeafData&) = delete;
3973  ~LeafData() = delete;
3974 }; // LeafData<Fp16>
3975 
3976 // --------------------------> LeafData<FpN> <------------------------------------
3977 
3978 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3979 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<FpN, CoordT, MaskT, LOG2DIM>
3980  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3981 { // this class has no additional data members, however every instance is immediately followed by
3982  // bitWidth*64 bytes. Since its base class is 32B aligned so are the bitWidth*64 bytes
3983  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3984  using BuildType = FpN;
3985  static constexpr bool FIXED_SIZE = false;
3986  __hostdev__ static constexpr uint32_t padding()
3987  {
3988  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3989  return 0;
3990  }
3991 
3992  __hostdev__ uint8_t bitWidth() const { return 1 << (BaseT::mFlags >> 5); } // 4,8,16,32 = 2^(2,3,4,5)
3993  __hostdev__ size_t memUsage() const { return sizeof(*this) + this->bitWidth() * 64; }
3994  __hostdev__ static size_t memUsage(uint32_t bitWidth) { return 96u + bitWidth * 64; }
3995  __hostdev__ float getValue(uint32_t i) const
3996  {
3997 #ifdef NANOVDB_FPN_BRANCHLESS // faster
3998  const int b = BaseT::mFlags >> 5; // b = 0, 1, 2, 3, 4 corresponding to 1, 2, 4, 8, 16 bits
3999 #if 0 // use LUT
4000  uint16_t code = reinterpret_cast<const uint16_t*>(this + 1)[i >> (4 - b)];
4001  const static uint8_t shift[5] = {15, 7, 3, 1, 0};
4002  const static uint16_t mask[5] = {1, 3, 15, 255, 65535};
4003  code >>= (i & shift[b]) << b;
4004  code &= mask[b];
4005 #else // no LUT
4006  uint32_t code = reinterpret_cast<const uint32_t*>(this + 1)[i >> (5 - b)];
4007  code >>= (i & ((32 >> b) - 1)) << b;
4008  code &= (1 << (1 << b)) - 1;
4009 #endif
4010 #else // use branched version (slow)
4011  float code;
4012  auto* values = reinterpret_cast<const uint8_t*>(this + 1);
4013  switch (BaseT::mFlags >> 5) {
4014  case 0u: // 1 bit float
4015  code = float((values[i >> 3] >> (i & 7)) & uint8_t(1));
4016  break;
4017  case 1u: // 2 bits float
4018  code = float((values[i >> 2] >> ((i & 3) << 1)) & uint8_t(3));
4019  break;
4020  case 2u: // 4 bits float
4021  code = float((values[i >> 1] >> ((i & 1) << 2)) & uint8_t(15));
4022  break;
4023  case 3u: // 8 bits float
4024  code = float(values[i]);
4025  break;
4026  default: // 16 bits float
4027  code = float(reinterpret_cast<const uint16_t*>(values)[i]);
4028  }
4029 #endif
4030  return float(code) * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/UNITS + min
4031  }
4032 
4033  /// @brief This class cannot be constructed or deleted
4034  LeafData() = delete;
4035  LeafData(const LeafData&) = delete;
4036  LeafData& operator=(const LeafData&) = delete;
4037  ~LeafData() = delete;
4038 }; // LeafData<FpN>
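// [Editorial note, not part of the original header] Size check for the variable-width layout
// above, assuming the default LOG2DIM == 3 (512 voxels per leaf): the LeafFnBase header is
// 96 bytes and the trailing codes occupy bitWidth*64 bytes, so memUsage(4) == 96 + 4*64 ==
// 352 bytes, versus 2144 bytes (96 + 512*4) for an uncompressed LeafData<float> leaf.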
4039 
4040 // --------------------------> LeafData<bool> <------------------------------------
4041 
4042 // Partial template specialization of LeafData with bool
4043 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4044 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<bool, CoordT, MaskT, LOG2DIM>
4045 {
4046  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4047  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4048  using ValueType = bool;
4049  using BuildType = bool;
4050  using FloatType = bool; // dummy value type
4051  using ArrayType = MaskT<LOG2DIM>; // type used for the internal mValue array
4052  static constexpr bool FIXED_SIZE = true;
4053 
4054  CoordT mBBoxMin; // 12B.
4055  uint8_t mBBoxDif[3]; // 3B.
4056  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4057  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4058  MaskT<LOG2DIM> mValues; // LOG2DIM(3): 64B.
4059  uint64_t mPadding[2]; // 16B padding to 32B alignment
4060 
4061  __hostdev__ static constexpr uint32_t padding() { return sizeof(LeafData) - 12u - 3u - 1u - 2 * sizeof(MaskT<LOG2DIM>) - 16u; }
4062  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4063  __hostdev__ static bool hasStats() { return false; }
4064  __hostdev__ bool getValue(uint32_t i) const { return mValues.isOn(i); }
4065  __hostdev__ bool getMin() const { return false; } // dummy
4066  __hostdev__ bool getMax() const { return false; } // dummy
4067  __hostdev__ bool getAvg() const { return false; } // dummy
4068  __hostdev__ bool getDev() const { return false; } // dummy
4069  __hostdev__ void setValue(uint32_t offset, bool v)
4070  {
4071  mValueMask.setOn(offset);
4072  mValues.set(offset, v);
4073  }
4074  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4075  __hostdev__ void setMin(const bool&) {} // no-op
4076  __hostdev__ void setMax(const bool&) {} // no-op
4077  __hostdev__ void setAvg(const bool&) {} // no-op
4078  __hostdev__ void setDev(const bool&) {} // no-op
4079 
4080  template<typename T>
4081  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4082 
4083  /// @brief This class cannot be constructed or deleted
4084  LeafData() = delete;
4085  LeafData(const LeafData&) = delete;
4086  LeafData& operator=(const LeafData&) = delete;
4087  ~LeafData() = delete;
4088 }; // LeafData<bool>
4089 
4090 // --------------------------> LeafData<ValueMask> <------------------------------------
4091 
4092 // Partial template specialization of LeafData with ValueMask
4093 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4094 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueMask, CoordT, MaskT, LOG2DIM>
4095 {
4096  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4097  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4098  using ValueType = bool;
4099  using BuildType = ValueMask;
4100  using FloatType = bool; // dummy value type
4101  using ArrayType = void; // type used for the internal mValue array - void means missing
4102  static constexpr bool FIXED_SIZE = true;
4103 
4104  CoordT mBBoxMin; // 12B.
4105  uint8_t mBBoxDif[3]; // 3B.
4106  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4107  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4108  uint64_t mPadding[2]; // 16B padding to 32B alignment
4109 
4110  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4111  __hostdev__ static bool hasStats() { return false; }
4112  __hostdev__ static constexpr uint32_t padding()
4113  {
4114  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4115  }
4116 
4117  __hostdev__ bool getValue(uint32_t i) const { return mValueMask.isOn(i); }
4118  __hostdev__ bool getMin() const { return false; } // dummy
4119  __hostdev__ bool getMax() const { return false; } // dummy
4120  __hostdev__ bool getAvg() const { return false; } // dummy
4121  __hostdev__ bool getDev() const { return false; } // dummy
4122  __hostdev__ void setValue(uint32_t offset, bool) { mValueMask.setOn(offset); }
4123  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4124  __hostdev__ void setMin(const ValueType&) {} // no-op
4125  __hostdev__ void setMax(const ValueType&) {} // no-op
4126  __hostdev__ void setAvg(const FloatType&) {} // no-op
4127  __hostdev__ void setDev(const FloatType&) {} // no-op
4128 
4129  template<typename T>
4130  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4131 
4132  /// @brief This class cannot be constructed or deleted
4133  LeafData() = delete;
4134  LeafData(const LeafData&) = delete;
4135  LeafData& operator=(const LeafData&) = delete;
4136  ~LeafData() = delete;
4137 }; // LeafData<ValueMask>
4138 
4139 // --------------------------> LeafIndexBase <------------------------------------
4140 
4141 // Base class for the LeafData partial specializations of the index build types (ValueIndex, ValueOnIndex, etc.)
4142 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4143 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafIndexBase
4144 {
4145  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4146  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4147  using ValueType = uint64_t;
4148  using FloatType = uint64_t;
4149  using ArrayType = void; // type used for the internal mValue array - void means missing
4150  static constexpr bool FIXED_SIZE = true;
4151 
4152  CoordT mBBoxMin; // 12B.
4153  uint8_t mBBoxDif[3]; // 3B.
4154  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4155  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4156  uint64_t mOffset, mPrefixSum; // 8B each: offset to the first value indexed by this leaf node, and packed 9-bit prefix sums of the active-bit counts of the 64-bit words in mValueMask
4157  __hostdev__ static constexpr uint32_t padding()
4158  {
4159  return sizeof(LeafIndexBase) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4160  }
4161  __hostdev__ static uint64_t memUsage() { return sizeof(LeafIndexBase); }
4162  __hostdev__ bool hasStats() const { return mFlags & (uint8_t(1) << 4); }
4163  // return the offset to the first value indexed by this leaf node
4164  __hostdev__ const uint64_t& firstOffset() const { return mOffset; }
4165  __hostdev__ void setMin(const ValueType&) {} // no-op
4166  __hostdev__ void setMax(const ValueType&) {} // no-op
4167  __hostdev__ void setAvg(const FloatType&) {} // no-op
4168  __hostdev__ void setDev(const FloatType&) {} // no-op
4169  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4170  template<typename T>
4171  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4172 
4173 protected:
4174  /// @brief This class should be used as an abstract class and only constructed or deleted via child classes
4175  LeafIndexBase() = default;
4176  LeafIndexBase(const LeafIndexBase&) = default;
4177  LeafIndexBase& operator=(const LeafIndexBase&) = default;
4178  ~LeafIndexBase() = default;
4179 }; // LeafIndexBase
4180 
4181 // --------------------------> LeafData<ValueIndex> <------------------------------------
4182 
4183 // Partial template specialization of LeafData with ValueIndex
4184 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4185 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4186  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4187 {
4188  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4189  using BuildType = ValueIndex;
4190  // return the total number of values indexed by this leaf node, excluding the optional 4 stats
4191  __hostdev__ static uint32_t valueCount() { return uint32_t(512); } // 8^3 = 2^9
4192  // return the offset to the last value indexed by this leaf node (disregarding optional stats)
4193  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + 511u; } // 2^9 - 1
4194  // if stats are available, they are always placed after the last voxel value in this leaf node
4195  __hostdev__ uint64_t getMin() const { return this->hasStats() ? BaseT::mOffset + 512u : 0u; }
4196  __hostdev__ uint64_t getMax() const { return this->hasStats() ? BaseT::mOffset + 513u : 0u; }
4197  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? BaseT::mOffset + 514u : 0u; }
4198  __hostdev__ uint64_t getDev() const { return this->hasStats() ? BaseT::mOffset + 515u : 0u; }
4199  __hostdev__ uint64_t getValue(uint32_t i) const { return BaseT::mOffset + i; } // dense leaf node with active and inactive voxels
4200 }; // LeafData<ValueIndex>
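/* [Editorial example, not part of the original header] A sketch of the typical use of a
   ValueIndex grid: every voxel stores an index into an external (sidecar) array of values
   rather than the value itself, and since this leaf is dense the index is simply
   BaseT::mOffset plus the voxel's linear offset. 'externalValues' is a hypothetical name
   for such a sidecar array.
   @code
   float lookup(const nanovdb::NanoLeaf<nanovdb::ValueIndex>& leaf,
                const float*                                  externalValues,
                const nanovdb::Coord&                         ijk)
   {
       return externalValues[leaf.getValue(ijk)];
   }
   @endcode
*/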
4201 
4202 // --------------------------> LeafData<ValueOnIndex> <------------------------------------
4203 
4204 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4205 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4206  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4207 {
4208  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4209  using BuildType = ValueOnIndex;
4210  __hostdev__ uint32_t valueCount() const
4211  {
4212  return util::countOn(BaseT::mValueMask.words()[7]) + (BaseT::mPrefixSum >> 54u & 511u); // the top 9-bit field of mPrefixSum counts the active bits in the first 7 words of mValueMask, so the last word is counted separately
4213  }
4214  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + this->valueCount() - 1u; }
4215  __hostdev__ uint64_t getMin() const { return this->hasStats() ? this->lastOffset() + 1u : 0u; }
4216  __hostdev__ uint64_t getMax() const { return this->hasStats() ? this->lastOffset() + 2u : 0u; }
4217  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? this->lastOffset() + 3u : 0u; }
4218  __hostdev__ uint64_t getDev() const { return this->hasStats() ? this->lastOffset() + 4u : 0u; }
4219  __hostdev__ uint64_t getValue(uint32_t i) const
4220  {
4221  //return mValueMask.isOn(i) ? mOffset + mValueMask.countOn(i) : 0u;// for debugging
4222  uint32_t n = i >> 6;
4223  const uint64_t w = BaseT::mValueMask.words()[n], mask = uint64_t(1) << (i & 63u);
4224  if (!(w & mask)) return uint64_t(0); // if i'th value is inactive return offset to background value
4225  uint64_t sum = BaseT::mOffset + util::countOn(w & (mask - 1u));
4226  if (n--) sum += BaseT::mPrefixSum >> (9u * n) & 511u;
4227  return sum;
4228  }
4229 }; // LeafData<ValueOnIndex>
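/* [Editorial example, not part of the original header] A slow reference version of
   LeafData<ValueOnIndex>::getValue above: active voxels are indexed compactly in bit order
   of mValueMask and inactive voxels map to index 0 (the background). The fast path replaces
   the loop below with the packed 9-bit per-word prefix sums stored in mPrefixSum.
   @code
   uint64_t onIndexReference(const nanovdb::Mask<3>& valueMask, uint64_t offset, uint32_t i)
   {
       if (!valueMask.isOn(i)) return 0u; // inactive voxel -> background index
       uint64_t n = 0;
       for (uint32_t j = 0; j < i; ++j) if (valueMask.isOn(j)) ++n; // count active voxels before i
       return offset + n;
   }
   @endcode
*/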
4230 
4231 // --------------------------> LeafData<ValueIndexMask> <------------------------------------
4232 
4233 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4234 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndexMask, CoordT, MaskT, LOG2DIM>
4235  : public LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4236 {
4237  using BuildType = ValueIndexMask;
4238  MaskT<LOG2DIM> mMask;
4239  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4240  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4241  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4242 }; // LeafData<ValueIndexMask>
4243 
4244 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4245 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndexMask, CoordT, MaskT, LOG2DIM>
4246  : public LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4247 {
4248  using BuildType = ValueOnIndexMask;
4249  MaskT<LOG2DIM> mMask;
4250  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4251  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4252  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4253 }; // LeafData<ValueOnIndexMask>
4254 
4255 // --------------------------> LeafData<Point> <------------------------------------
4256 
4257 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4258 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Point, CoordT, MaskT, LOG2DIM>
4259 {
4260  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4261  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4262  using ValueType = uint64_t;
4263  using BuildType = Point;
4264  using FloatType = typename FloatTraits<ValueType>::FloatType;
4265  using ArrayType = uint16_t; // type used for the internal mValue array
4266  static constexpr bool FIXED_SIZE = true;
4267 
4268  CoordT mBBoxMin; // 12B.
4269  uint8_t mBBoxDif[3]; // 3B.
4270  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4271  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4272 
4273  uint64_t mOffset; // 8B
4274  uint64_t mPointCount; // 8B
4275  alignas(32) uint16_t mValues[1u << 3 * LOG2DIM]; // 1KB
4276  // no padding
4277 
4278  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
4279  ///
4280  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
4281  __hostdev__ static constexpr uint32_t padding()
4282  {
4283  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u + (1u << 3 * LOG2DIM) * 2u);
4284  }
4285  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4286 
4287  __hostdev__ uint64_t offset() const { return mOffset; }
4288  __hostdev__ uint64_t pointCount() const { return mPointCount; }
4289  __hostdev__ uint64_t first(uint32_t i) const { return i ? uint64_t(mValues[i - 1u]) + mOffset : mOffset; }
4290  __hostdev__ uint64_t last(uint32_t i) const { return uint64_t(mValues[i]) + mOffset; }
4291  __hostdev__ uint64_t getValue(uint32_t i) const { return uint64_t(mValues[i]); }
4292  __hostdev__ void setValueOnly(uint32_t offset, uint16_t value) { mValues[offset] = value; }
4293  __hostdev__ void setValue(uint32_t offset, uint16_t value)
4294  {
4295  mValueMask.setOn(offset);
4296  mValues[offset] = value;
4297  }
4298  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4299 
4300  __hostdev__ ValueType getMin() const { return mOffset; }
4301  __hostdev__ ValueType getMax() const { return mPointCount; }
4302  __hostdev__ FloatType getAvg() const { return 0.0f; }
4303  __hostdev__ FloatType getDev() const { return 0.0f; }
4304 
4305  __hostdev__ void setMin(const ValueType&) {}
4306  __hostdev__ void setMax(const ValueType&) {}
4307  __hostdev__ void setAvg(const FloatType&) {}
4308  __hostdev__ void setDev(const FloatType&) {}
4309 
4310  template<typename T>
4311  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4312 
4313  /// @brief This class cannot be constructed or deleted
4314  LeafData() = delete;
4315  LeafData(const LeafData&) = delete;
4316  LeafData& operator=(const LeafData&) = delete;
4317  ~LeafData() = delete;
4318 }; // LeafData<Point>
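/* [Editorial example, not part of the original header] A sketch of how the per-voxel ranges
   in a Point leaf are typically consumed: first(n) and last(n) bracket, as a half-open range,
   the indices into an external point array of the points that fall inside the voxel with
   linear offset n. 'points' is a hypothetical name for that array (normally stored as blind
   data of the grid).
   @code
   template<typename PointT>
   void visitPoints(const nanovdb::NanoLeaf<nanovdb::Point>& leaf, uint32_t n, const PointT* points)
   {
       for (uint64_t j = leaf.data()->first(n), end = leaf.data()->last(n); j < end; ++j) {
           const PointT& p = points[j]; // process point p
           (void)p;
       }
   }
   @endcode
*/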
4319 
4320 // --------------------------> LeafNode<T> <------------------------------------
4321 
4322 /// @brief Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
4323 template<typename BuildT,
4324  typename CoordT = Coord,
4325  template<uint32_t> class MaskT = Mask,
4326  uint32_t Log2Dim = 3>
4327 class LeafNode : public LeafData<BuildT, CoordT, MaskT, Log2Dim>
4328 {
4329 public:
4330  struct ChildNodeType
4331  {
4332  static constexpr uint32_t TOTAL = 0;
4333  static constexpr uint32_t DIM = 1;
4334  __hostdev__ static uint32_t dim() { return 1u; }
4335  }; // Voxel
4336  using LeafNodeType = LeafNode<BuildT, CoordT, MaskT, Log2Dim>;
4337  using DataType = LeafData<BuildT, CoordT, MaskT, Log2Dim>;
4338  using ValueType = typename DataType::ValueType;
4339  using FloatType = typename DataType::FloatType;
4340  using BuildType = typename DataType::BuildType;
4341  using CoordType = CoordT;
4342  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4343  template<uint32_t LOG2>
4344  using MaskType = MaskT<LOG2>;
4345  template<bool ON>
4346  using MaskIterT = typename Mask<Log2Dim>::template Iterator<ON>;
4347 
4348  /// @brief Visits all active values in a leaf node
4349  class ValueOnIterator : public MaskIterT<true>
4350  {
4351  using BaseT = MaskIterT<true>;
4352  const LeafNode* mParent;
4353 
4354  public:
4355  __hostdev__ ValueOnIterator()
4356  : BaseT()
4357  , mParent(nullptr)
4358  {
4359  }
4360  __hostdev__ ValueOnIterator(const LeafNode* parent)
4361  : BaseT(parent->data()->mValueMask.beginOn())
4362  , mParent(parent)
4363  {
4364  }
4365  ValueOnIterator& operator=(const ValueOnIterator&) = default;
4366  __hostdev__ ValueType operator*() const
4367  {
4368  NANOVDB_ASSERT(*this);
4369  return mParent->getValue(BaseT::pos());
4370  }
4371  __hostdev__ CoordT getCoord() const
4372  {
4373  NANOVDB_ASSERT(*this);
4374  return mParent->offsetToGlobalCoord(BaseT::pos());
4375  }
4376  }; // Member class ValueOnIterator
4377 
4378  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
4379  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
4380 
4381  /// @brief Visits all inactive values in a leaf node
4382  class ValueOffIterator : public MaskIterT<false>
4383  {
4384  using BaseT = MaskIterT<false>;
4385  const LeafNode* mParent;
4386 
4387  public:
4388  __hostdev__ ValueOffIterator()
4389  : BaseT()
4390  , mParent(nullptr)
4391  {
4392  }
4393  __hostdev__ ValueOffIterator(const LeafNode* parent)
4394  : BaseT(parent->data()->mValueMask.beginOff())
4395  , mParent(parent)
4396  {
4397  }
4398  ValueOffIterator& operator=(const ValueOffIterator&) = default;
4399  __hostdev__ ValueType operator*() const
4400  {
4401  NANOVDB_ASSERT(*this);
4402  return mParent->getValue(BaseT::pos());
4403  }
4404  __hostdev__ CoordT getCoord() const
4405  {
4406  NANOVDB_ASSERT(*this);
4407  return mParent->offsetToGlobalCoord(BaseT::pos());
4408  }
4409  }; // Member class ValueOffIterator
4410 
4411  __hostdev__ ValueOffIterator beginValueOff() const { return ValueOffIterator(this); }
4412  __hostdev__ ValueOffIterator cbeginValueOff() const { return ValueOffIterator(this); }
4413 
4414  /// @brief Visits all values in a leaf node, i.e. both active and inactive values
4415  class ValueIterator
4416  {
4417  const LeafNode* mParent;
4418  uint32_t mPos;
4419 
4420  public:
4421  __hostdev__ ValueIterator()
4422  : mParent(nullptr)
4423  , mPos(1u << 3 * Log2Dim)
4424  {
4425  }
4426  __hostdev__ ValueIterator(const LeafNode* parent)
4427  : mParent(parent)
4428  , mPos(0)
4429  {
4430  NANOVDB_ASSERT(parent);
4431  }
4432  ValueIterator& operator=(const ValueIterator&) = default;
4433  __hostdev__ ValueType operator*() const
4434  {
4435  NANOVDB_ASSERT(*this);
4436  return mParent->getValue(mPos);
4437  }
4438  __hostdev__ CoordT getCoord() const
4439  {
4440  NANOVDB_ASSERT(*this);
4441  return mParent->offsetToGlobalCoord(mPos);
4442  }
4443  __hostdev__ bool isActive() const
4444  {
4445  NANOVDB_ASSERT(*this);
4446  return mParent->isActive(mPos);
4447  }
4448  __hostdev__ operator bool() const { return mPos < (1u << 3 * Log2Dim); }
4449  __hostdev__ ValueIterator& operator++()
4450  {
4451  ++mPos;
4452  return *this;
4453  }
4454  __hostdev__ ValueIterator operator++(int)
4455  {
4456  auto tmp = *this;
4457  ++(*this);
4458  return tmp;
4459  }
4460  }; // Member class ValueIterator
4461 
4462  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
4463  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
4464 
4465  static_assert(util::is_same<ValueType, typename BuildToValueMap<BuildType>::Type>::value, "Mismatching BuildType");
4466  static constexpr uint32_t LOG2DIM = Log2Dim;
4467  static constexpr uint32_t TOTAL = LOG2DIM; // needed by parent nodes
4468  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
4469  static constexpr uint32_t SIZE = 1u << 3 * LOG2DIM; // total number of voxels represented by this node
4470  static constexpr uint32_t MASK = (1u << LOG2DIM) - 1u; // mask for bit operations
4471  static constexpr uint32_t LEVEL = 0; // level 0 = leaf
4472  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
4473 
4474  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4475 
4476  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4477 
4478  /// @brief Return a const reference to the bit mask of active voxels in this leaf node
4479  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
4480  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
4481 
4482  /// @brief Return a const reference to the minimum active value encoded in this leaf node
4483  __hostdev__ ValueType minimum() const { return DataType::getMin(); }
4484 
4485  /// @brief Return a const reference to the maximum active value encoded in this leaf node
4486  __hostdev__ ValueType maximum() const { return DataType::getMax(); }
4487 
4488  /// @brief Return a const reference to the average of all the active values encoded in this leaf node
4489  __hostdev__ FloatType average() const { return DataType::getAvg(); }
4490 
4491  /// @brief Return the variance of all the active values encoded in this leaf node
4492  __hostdev__ FloatType variance() const { return Pow2(DataType::getDev()); }
4493 
4494  /// @brief Return a const reference to the standard deviation of all the active values encoded in this leaf node
4495  __hostdev__ FloatType stdDeviation() const { return DataType::getDev(); }
4496 
4497  __hostdev__ uint8_t flags() const { return DataType::mFlags; }
4498 
4499  /// @brief Return the origin in index space of this leaf node
4500  __hostdev__ CoordT origin() const { return DataType::mBBoxMin & ~MASK; }
4501 
4502  /// @brief Compute the local coordinates from a linear offset
4503  /// @param n Linear offset into this node's dense value table
4504  /// @return Local (vs global) 3D coordinates
4505  __hostdev__ static CoordT OffsetToLocalCoord(uint32_t n)
4506  {
4507  NANOVDB_ASSERT(n < SIZE);
4508  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
4509  return CoordT(n >> 2 * LOG2DIM, m >> LOG2DIM, m & MASK);
4510  }
4511 
4512  /// @brief Converts (in place) a local index coordinate to a global index coordinate
4513  __hostdev__ void localToGlobalCoord(Coord& ijk) const { ijk += this->origin(); }
4514 
4515  __hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
4516  {
4517  return OffsetToLocalCoord(n) + this->origin();
4518  }
4519 
4520  /// @brief Return the dimension, in index space, of this leaf node (typically 8, as for OpenVDB leaf nodes)
4521  __hostdev__ static uint32_t dim() { return 1u << LOG2DIM; }
4522 
4523  /// @brief Return the bounding box in index space of active values in this leaf node
4524  __hostdev__ math::BBox<CoordT> bbox() const
4525  {
4526  math::BBox<CoordT> bbox(DataType::mBBoxMin, DataType::mBBoxMin);
4527  if (this->hasBBox()) {
4528  bbox.max()[0] += DataType::mBBoxDif[0];
4529  bbox.max()[1] += DataType::mBBoxDif[1];
4530  bbox.max()[2] += DataType::mBBoxDif[2];
4531  } else { // very rare case
4532  bbox = math::BBox<CoordT>(); // invalid
4533  }
4534  return bbox;
4535  }
4536 
4537  /// @brief Return the total number of voxels (e.g. values) encoded in this leaf node
4538  __hostdev__ static uint32_t voxelCount() { return 1u << (3 * LOG2DIM); }
4539 
4540  __hostdev__ static uint32_t padding() { return DataType::padding(); }
4541 
4542  /// @brief return memory usage in bytes for the leaf node
4543  __hostdev__ uint64_t memUsage() const { return DataType::memUsage(); }
4544 
4545  /// @brief This class cannot be constructed or deleted
4546  LeafNode() = delete;
4547  LeafNode(const LeafNode&) = delete;
4548  LeafNode& operator=(const LeafNode&) = delete;
4549  ~LeafNode() = delete;
4550 
4551  /// @brief Return the voxel value at the given offset.
4552  __hostdev__ ValueType getValue(uint32_t offset) const { return DataType::getValue(offset); }
4553 
4554  /// @brief Return the voxel value at the given coordinate.
4555  __hostdev__ ValueType getValue(const CoordT& ijk) const { return DataType::getValue(CoordToOffset(ijk)); }
4556 
4557  /// @brief Return the first value in this leaf node.
4558  __hostdev__ ValueType getFirstValue() const { return this->getValue(0); }
4559  /// @brief Return the last value in this leaf node.
4560  __hostdev__ ValueType getLastValue() const { return this->getValue(SIZE - 1); }
4561 
4562  /// @brief Sets the value at the specified location and activate its state.
4563  ///
4564  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4565  __hostdev__ void setValue(const CoordT& ijk, const ValueType& v) { DataType::setValue(CoordToOffset(ijk), v); }
4566 
4567  /// @brief Sets the value at the specified location but leaves its state unchanged.
4568  ///
4569  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4570  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& v) { DataType::setValueOnly(offset, v); }
4571  __hostdev__ void setValueOnly(const CoordT& ijk, const ValueType& v) { DataType::setValueOnly(CoordToOffset(ijk), v); }
4572 
4573  /// @brief Return @c true if the voxel value at the given coordinate is active.
4574  __hostdev__ bool isActive(const CoordT& ijk) const { return DataType::mValueMask.isOn(CoordToOffset(ijk)); }
4575  __hostdev__ bool isActive(uint32_t n) const { return DataType::mValueMask.isOn(n); }
4576 
4577  /// @brief Return @c true if any of the voxel value are active in this leaf node.
4578  __hostdev__ bool isActive() const
4579  {
4580  //NANOVDB_ASSERT( bool(DataType::mFlags & uint8_t(2)) != DataType::mValueMask.isOff() );
4581  //return DataType::mFlags & uint8_t(2);
4582  return !DataType::mValueMask.isOff();
4583  }
4584 
4585  __hostdev__ bool hasBBox() const { return DataType::mFlags & uint8_t(2); }
4586 
4587  /// @brief Return @c true if the voxel value at the given coordinate is active and updates @c v with the value.
4588  __hostdev__ bool probeValue(const CoordT& ijk, ValueType& v) const
4589  {
4590  const uint32_t n = CoordToOffset(ijk);
4591  v = DataType::getValue(n);
4592  return DataType::mValueMask.isOn(n);
4593  }
4594 
4595  __hostdev__ const LeafNode* probeLeaf(const CoordT&) const { return this; }
4596 
4597  /// @brief Return the linear offset corresponding to the given coordinate
4598  __hostdev__ static uint32_t CoordToOffset(const CoordT& ijk)
4599  {
4600  return ((ijk[0] & MASK) << (2 * LOG2DIM)) | ((ijk[1] & MASK) << LOG2DIM) | (ijk[2] & MASK);
4601  }
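 // [Editorial note, not part of the original header] For LOG2DIM = 3 this packs the local
 // coordinates as ((i & 7) << 6) | ((j & 7) << 3) | (k & 7); e.g. ijk = (9, 1, 18) has local
 // coordinates (1, 1, 2) and therefore maps to offset 1*64 + 1*8 + 2 = 74.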
4602 
4603  /// @brief Updates the local bounding box of active voxels in this node. Return true if bbox was updated.
4604  ///
4605  /// @warning It assumes that the origin and value mask have already been set.
4606  ///
4607  /// @details This method is based on a few (intrinsic) bit operations and hence is relatively fast.
4608  /// However, it should only be called if either the value mask has changed or if the
4609  /// active bounding box is still undefined, e.g. during construction of this node.
4610  __hostdev__ bool updateBBox();
4611 
4612  template<typename OpT, typename... ArgsT>
4613  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4614  {
4615  return OpT::get(*this, CoordToOffset(ijk), args...);
4616  }
4617 
4618  template<typename OpT, typename... ArgsT>
4619  __hostdev__ auto get(const uint32_t n, ArgsT&&... args) const
4620  {
4621  return OpT::get(*this, n, args...);
4622  }
4623 
4624  template<typename OpT, typename... ArgsT>
4625  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
4626  {
4627  return OpT::set(*this, CoordToOffset(ijk), args...);
4628  }
4629 
4630  template<typename OpT, typename... ArgsT>
4631  __hostdev__ auto set(const uint32_t n, ArgsT&&... args)
4632  {
4633  return OpT::set(*this, n, args...);
4634  }
4635 
4636 private:
4637  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(LeafData) is misaligned");
4638 
4639  template<typename, int, int, int>
4640  friend class ReadAccessor;
4641 
4642  template<typename>
4643  friend class RootNode;
4644  template<typename, uint32_t>
4645  friend class InternalNode;
4646 
4647 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
4648  /// @brief Private method to return a voxel value and update a (dummy) ReadAccessor
4649  template<typename AccT>
4650  __hostdev__ ValueType getValueAndCache(const CoordT& ijk, const AccT&) const { return this->getValue(ijk); }
4651 
4652  /// @brief Return the node information.
4653  template<typename AccT>
4654  __hostdev__ typename AccT::NodeInfo getNodeInfoAndCache(const CoordType& /*ijk*/, const AccT& /*acc*/) const
4655  {
4656  using NodeInfoT = typename AccT::NodeInfo;
4657  return NodeInfoT{LEVEL, this->dim(), this->minimum(), this->maximum(), this->average(), this->stdDeviation(), this->bbox()[0], this->bbox()[1]};
4658  }
4659 
4660  template<typename AccT>
4661  __hostdev__ bool isActiveAndCache(const CoordT& ijk, const AccT&) const { return this->isActive(ijk); }
4662 
4663  template<typename AccT>
4664  __hostdev__ bool probeValueAndCache(const CoordT& ijk, ValueType& v, const AccT&) const { return this->probeValue(ijk, v); }
4665 
4666  template<typename AccT>
4667  __hostdev__ const LeafNode* probeLeafAndCache(const CoordT&, const AccT&) const { return this; }
4668 #endif
4669 
4670  template<typename RayT, typename AccT>
4671  __hostdev__ uint32_t getDimAndCache(const CoordT&, const RayT& /*ray*/, const AccT&) const
4672  {
4673  if (DataType::mFlags & uint8_t(1u))
4674  return this->dim(); // skip this node if the 1st bit is set
4675 
4676  //if (!ray.intersects( this->bbox() )) return 1 << LOG2DIM;
4677  return ChildNodeType::dim();
4678  }
4679 
4680  template<typename OpT, typename AccT, typename... ArgsT>
4681  __hostdev__ auto
4682  //__hostdev__ decltype(OpT::get(util::declval<const LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4683  getAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args) const
4684  {
4685  return OpT::get(*this, CoordToOffset(ijk), args...);
4686  }
4687 
4688  template<typename OpT, typename AccT, typename... ArgsT>
4689  //__hostdev__ auto // occasionally fails with NVCC
4690  __hostdev__ decltype(OpT::set(util::declval<LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4691  setAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args)
4692  {
4693  return OpT::set(*this, CoordToOffset(ijk), args...);
4694  }
4695 
4696 }; // LeafNode class
4697 
4698 // --------------------------> LeafNode<T>::updateBBox <------------------------------------
4699 
4700 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4701 __hostdev__ inline bool LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
4702 {
4703  static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOG2DIM = 3!");
4704  if (DataType::mValueMask.isOff()) {
4705  DataType::mFlags &= ~uint8_t(2); // set 2nd bit off, which indicates that this node has no bbox
4706  return false;
4707  }
4708  auto update = [&](uint32_t min, uint32_t max, int axis) {
4709  NANOVDB_ASSERT(min <= max && max < 8);
4710  DataType::mBBoxMin[axis] = (DataType::mBBoxMin[axis] & ~MASK) + int(min);
4711  DataType::mBBoxDif[axis] = uint8_t(max - min);
4712  };
4713  uint64_t *w = DataType::mValueMask.words(), word64 = *w;
4714  uint32_t Xmin = word64 ? 0u : 8u, Xmax = Xmin;
4715  for (int i = 1; i < 8; ++i) { // loop over the remaining 7 of the 8 64-bit words
4716  if (w[i]) { // skip if word has no set bits
4717  word64 |= w[i]; // union 8 x 64 bits words into one 64 bit word
4718  if (Xmin == 8)
4719  Xmin = i; // only set once
4720  Xmax = i;
4721  }
4722  }
4723  NANOVDB_ASSERT(word64);
4724  update(Xmin, Xmax, 0);
4725  update(util::findLowestOn(word64) >> 3, util::findHighestOn(word64) >> 3, 1);
4726  const uint32_t *p = reinterpret_cast<const uint32_t*>(&word64), word32 = p[0] | p[1];
4727  const uint16_t *q = reinterpret_cast<const uint16_t*>(&word32), word16 = q[0] | q[1];
4728  const uint8_t *b = reinterpret_cast<const uint8_t*>(&word16), byte = b[0] | b[1];
4729  NANOVDB_ASSERT(byte);
4730  update(util::findLowestOn(static_cast<uint32_t>(byte)), util::findHighestOn(static_cast<uint32_t>(byte)), 2);
4731  DataType::mFlags |= uint8_t(2); // set 2nd bit on, which indicates that this node has a bbox
4732  return true;
4733 } // LeafNode::updateBBox
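/* [Editorial note, not part of the original header] How the folding above recovers the three
   axis ranges from the 512-bit value mask: word w of mValueMask covers the 64 voxels with
   x == w, so the non-zero words bound X; OR-ing all eight words into word64 collapses X away,
   leaving a 64-bit mask indexed by y*8 + z whose lowest/highest set bits, divided by 8, bound Y;
   OR-ing the two 32-bit halves, then the 16-bit halves, then the 8-bit halves of word64 also
   collapses Y away, leaving a single byte indexed by z whose lowest/highest set bits bound Z. */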
4734 
4735 // --------------------------> Template specializations and traits <------------------------------------
4736 
4737 /// @brief Template specializations to the default configuration used in OpenVDB:
4738 /// Root -> 32^3 -> 16^3 -> 8^3
4739 template<typename BuildT>
4740 using NanoLeaf = LeafNode<BuildT, Coord, Mask, 3>;
4741 template<typename BuildT>
4742 using NanoLower = InternalNode<NanoLeaf<BuildT>, 4>;
4743 template<typename BuildT>
4744 using NanoUpper = InternalNode<NanoLower<BuildT>, 5>;
4745 template<typename BuildT>
4746 using NanoRoot = RootNode<NanoUpper<BuildT>>;
4747 template<typename BuildT>
4748 using NanoTree = Tree<NanoRoot<BuildT>>;
4749 template<typename BuildT>
4750 using NanoGrid = Grid<NanoTree<BuildT>>;
4751 
4752 /// @brief Trait to map from LEVEL to node type
4753 template<typename BuildT, int LEVEL>
4754 struct NanoNode;
4755 
4756 // Partial template specialization of above Node struct
4757 template<typename BuildT>
4758 struct NanoNode<BuildT, 0>
4759 {
4760  using Type = NanoLeaf<BuildT>;
4761  using type = NanoLeaf<BuildT>;
4762 };
4763 template<typename BuildT>
4764 struct NanoNode<BuildT, 1>
4765 {
4766  using Type = NanoLower<BuildT>;
4767  using type = NanoLower<BuildT>;
4768 };
4769 template<typename BuildT>
4770 struct NanoNode<BuildT, 2>
4771 {
4772  using Type = NanoUpper<BuildT>;
4773  using type = NanoUpper<BuildT>;
4774 };
4775 template<typename BuildT>
4776 struct NanoNode<BuildT, 3>
4777 {
4778  using Type = NanoRoot<BuildT>;
4779  using type = NanoRoot<BuildT>;
4780 };
4781 
4802 
4824 
4825 // --------------------------> callNanoGrid <------------------------------------
4826 
4827 /**
4828 * @brief Below is an example of the struct used for generic programming with callNanoGrid
4829 * @details For an example see "struct Crc32TailOld" in nanovdb/tools/GridChecksum.h or
4830 * "struct IsNanoGridValid" in nanovdb/tools/GridValidator.h
4831 * @code
4832 * struct OpT {
4833 * // define these two static functions with non-const GridData
4834 * template <typename BuildT>
4835 * static auto known( GridData *gridData, args...);
4836 * static auto unknown( GridData *gridData, args...);
4837 * // or alternatively these two static functions with const GridData
4838 * template <typename BuildT>
4839 * static auto known(const GridData *gridData, args...);
4840 * static auto unknown(const GridData *gridData, args...);
4841 * };
4842 * @endcode
4843 *
4844 * @brief Here is an example of how to use callNanoGrid in client code
4845 * @code
4846 * return callNanoGrid<OpT>(gridData, args...);
4847 * @endcode
4848 */
4849 
4850 /// @brief Use this function, which takes a pointer to GridData, to call
4851 /// other functions that require a NanoGrid of a known ValueType.
4852 /// @details This function allows for generic programming by converting GridData
4853 /// to a NanoGrid of the type encoded in GridData::mGridType.
4854 template<typename OpT, typename GridDataT, typename... ArgsT>
4855 auto callNanoGrid(GridDataT *gridData, ArgsT&&... args)
4856 {
4857  static_assert(util::is_same<GridDataT, GridData, const GridData>::value, "Expected gridData to be of type GridData* or const GridData*");
4858  switch (gridData->mGridType){
4859  case GridType::Float:
4860  return OpT::template known<float>(gridData, args...);
4861  case GridType::Double:
4862  return OpT::template known<double>(gridData, args...);
4863  case GridType::Int16:
4864  return OpT::template known<int16_t>(gridData, args...);
4865  case GridType::Int32:
4866  return OpT::template known<int32_t>(gridData, args...);
4867  case GridType::Int64:
4868  return OpT::template known<int64_t>(gridData, args...);
4869  case GridType::Vec3f:
4870  return OpT::template known<Vec3f>(gridData, args...);
4871  case GridType::Vec3d:
4872  return OpT::template known<Vec3d>(gridData, args...);
4873  case GridType::UInt32:
4874  return OpT::template known<uint32_t>(gridData, args...);
4875  case GridType::Mask:
4876  return OpT::template known<ValueMask>(gridData, args...);
4877  case GridType::Index:
4878  return OpT::template known<ValueIndex>(gridData, args...);
4879  case GridType::OnIndex:
4880  return OpT::template known<ValueOnIndex>(gridData, args...);
4881  case GridType::IndexMask:
4882  return OpT::template known<ValueIndexMask>(gridData, args...);
4883  case GridType::OnIndexMask:
4884  return OpT::template known<ValueOnIndexMask>(gridData, args...);
4885  case GridType::Boolean:
4886  return OpT::template known<bool>(gridData, args...);
4887  case GridType::RGBA8:
4888  return OpT::template known<math::Rgba8>(gridData, args...);
4889  case GridType::Fp4:
4890  return OpT::template known<Fp4>(gridData, args...);
4891  case GridType::Fp8:
4892  return OpT::template known<Fp8>(gridData, args...);
4893  case GridType::Fp16:
4894  return OpT::template known<Fp16>(gridData, args...);
4895  case GridType::FpN:
4896  return OpT::template known<FpN>(gridData, args...);
4897  case GridType::Vec4f:
4898  return OpT::template known<Vec4f>(gridData, args...);
4899  case GridType::Vec4d:
4900  return OpT::template known<Vec4d>(gridData, args...);
4901  case GridType::UInt8:
4902  return OpT::template known<uint8_t>(gridData, args...);
4903  default:
4904  return OpT::unknown(gridData, args...);
4905  }
4906 }// callNanoGrid
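/* [Editorial example, not part of the original header] A minimal OpT sketch for callNanoGrid,
   assuming Grid::activeVoxelCount() as the per-type operation; the struct name is hypothetical.
   @code
   struct ActiveVoxelCountOp
   {
       template<typename BuildT>
       static uint64_t known(const nanovdb::GridData* gridData)
       {
           return reinterpret_cast<const nanovdb::NanoGrid<BuildT>*>(gridData)->activeVoxelCount();
       }
       static uint64_t unknown(const nanovdb::GridData*) { return 0u; } // unsupported GridType
   };
   // usage: const uint64_t count = nanovdb::callNanoGrid<ActiveVoxelCountOp>(gridData);
   @endcode
*/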
4907 
4908 // --------------------------> ReadAccessor <------------------------------------
4909 
4910 /// @brief A read-only value accessor with three levels of node caching. This allows for
4911 /// inverse tree traversal during lookup, which is on average significantly faster
4912 /// than calling the equivalent method on the tree (i.e. top-down traversal).
4913 ///
4914 /// @note By virtue of the fact that a value accessor accelerates random access operations
4915 /// by re-using cached access patterns, this accessor should be reused for multiple access
4916 /// operations. In other words, never create an instance of this accessor for a single
4917 /// access only. In general avoid single access operations with this accessor, and
4918 /// if that is not possible call the corresponding method on the tree instead.
4919 ///
4920 /// @warning Since this ReadAccessor internally caches raw pointers to the nodes of the tree
4921 /// structure, it is not safe to copy between host and device, or even to share among
4922 /// multiple threads on the same host or device. However, it is light-weight so simply
4923 /// instantiate one per thread (on the host and/or device).
4924 ///
4925 /// @details Used to accelerate random access into a VDB tree. Provides on average
4926 /// O(1) random access operations by means of inverse tree traversal,
4927 /// which amortizes the non-constant time complexity of the root node.
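///
/// @par Example (editorial sketch, not part of the original documentation); assumes a float
/// grid built elsewhere, e.g. with createNanoGrid:
/// @code
/// float sumAlongZ(const nanovdb::NanoGrid<float>& grid, nanovdb::Coord ijk, int count)
/// {
///     auto acc = grid.getAccessor(); // light-weight; create one per thread and never share it
///     float sum = 0.0f;
///     for (int i = 0; i < count; ++i, ++ijk[2]) sum += acc.getValue(ijk); // reuse exploits caching
///     return sum;
/// }
/// @endcode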
4928 
4929 template<typename BuildT>
4930 class ReadAccessor<BuildT, -1, -1, -1>
4931 {
4932  using GridT = NanoGrid<BuildT>; // grid
4933  using TreeT = NanoTree<BuildT>; // tree
4934  using RootT = NanoRoot<BuildT>; // root node
4935  using LeafT = NanoLeaf<BuildT>; // Leaf node
4936  using FloatType = typename RootT::FloatType;
4937  using CoordValueType = typename RootT::CoordType::ValueType;
4938 
4939  mutable const RootT* mRoot; // 8 bytes (mutable to allow for access methods to be const)
4940 public:
4941  using BuildType = BuildT;
4942  using ValueType = typename RootT::ValueType;
4943  using CoordType = typename RootT::CoordType;
4944 
4945  static const int CacheLevels = 0;
4946 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
4947  struct NodeInfo
4948  {
4949  uint32_t mLevel; // 4B
4950  uint32_t mDim; // 4B
4951  ValueType mMinimum; // typically 4B
4952  ValueType mMaximum; // typically 4B
4953  FloatType mAverage; // typically 4B
4954  FloatType mStdDevi; // typically 4B
4955  CoordType mBBoxMin; // 3*4B
4956  CoordType mBBoxMax; // 3*4B
4957  };
4958 #endif
4959  /// @brief Constructor from a root node
4960  __hostdev__ ReadAccessor(const RootT& root)
4961  : mRoot{&root}
4962  {
4963  }
4964 
4965  /// @brief Constructor from a grid
4966  __hostdev__ ReadAccessor(const GridT& grid)
4967  : ReadAccessor(grid.tree().root())
4968  {
4969  }
4970 
4971  /// @brief Constructor from a tree
4972  __hostdev__ ReadAccessor(const TreeT& tree)
4973  : ReadAccessor(tree.root())
4974  {
4975  }
4976 
4977  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
4978  /// @note No-op since this template specialization has no cache
4979  __hostdev__ void clear() {}
4980 
4981  __hostdev__ const RootT& root() const { return *mRoot; }
4982 
4983  /// @brief Default constructors
4984  ReadAccessor(const ReadAccessor&) = default;
4985  ~ReadAccessor() = default;
4986  ReadAccessor& operator=(const ReadAccessor&) = default;
4987 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
4988  __hostdev__ ValueType getValue(const CoordType& ijk) const
4989  {
4990  return this->template get<GetValue<BuildT>>(ijk);
4991  }
4992  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4993  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4994  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4995  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4996  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4997  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4998  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4999 #else // NANOVDB_NEW_ACCESSOR_METHODS
5000  __hostdev__ ValueType getValue(const CoordType& ijk) const
5001  {
5002  return mRoot->getValueAndCache(ijk, *this);
5003  }
5004  __hostdev__ ValueType getValue(int i, int j, int k) const
5005  {
5006  return this->getValue(CoordType(i, j, k));
5007  }
5008  __hostdev__ ValueType operator()(const CoordType& ijk) const
5009  {
5010  return this->getValue(ijk);
5011  }
5012  __hostdev__ ValueType operator()(int i, int j, int k) const
5013  {
5014  return this->getValue(CoordType(i, j, k));
5015  }
5016 
5017  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
5018  {
5019  return mRoot->getNodeInfoAndCache(ijk, *this);
5020  }
5021 
5022  __hostdev__ bool isActive(const CoordType& ijk) const
5023  {
5024  return mRoot->isActiveAndCache(ijk, *this);
5025  }
5026 
5027  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
5028  {
5029  return mRoot->probeValueAndCache(ijk, v, *this);
5030  }
5031 
5032  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
5033  {
5034  return mRoot->probeLeafAndCache(ijk, *this);
5035  }
5036 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5037  template<typename RayT>
5038  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5039  {
5040  return mRoot->getDimAndCache(ijk, ray, *this);
5041  }
5042  template<typename OpT, typename... ArgsT>
5043  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
5044  {
5045  return mRoot->template get<OpT>(ijk, args...);
5046  }
5047 
5048  template<typename OpT, typename... ArgsT>
5049  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
5050  {
5051  return const_cast<RootT*>(mRoot)->template set<OpT>(ijk, args...);
5052  }
5053 
5054 private:
5055  /// @brief Allow nodes to insert themselves into the cache.
5056  template<typename>
5057  friend class RootNode;
5058  template<typename, uint32_t>
5059  friend class InternalNode;
5060  template<typename, typename, template<uint32_t> class, uint32_t>
5061  friend class LeafNode;
5062 
5063  /// @brief No-op
5064  template<typename NodeT>
5065  __hostdev__ void insert(const CoordType&, const NodeT*) const {}
5066 }; // ReadAccessor<ValueT, -1, -1, -1> class
5067 
5068 /// @brief Node caching at a single tree level
5069 template<typename BuildT, int LEVEL0>
5070 class ReadAccessor<BuildT, LEVEL0, -1, -1> //e.g. 0, 1, 2
5071 {
5072  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 should be 0, 1, or 2");
5073 
5074  using GridT = NanoGrid<BuildT>; // grid
5075  using TreeT = NanoTree<BuildT>;
5076  using RootT = NanoRoot<BuildT>; // root node
5077  using LeafT = NanoLeaf<BuildT>; // Leaf node
5078  using NodeT = typename NodeTrait<TreeT, LEVEL0>::type;
5079  using CoordT = typename RootT::CoordType;
5080  using ValueT = typename RootT::ValueType;
5081 
5082  using FloatType = typename RootT::FloatType;
5083  using CoordValueType = typename RootT::CoordT::ValueType;
5084 
5085  // All member data are mutable to allow for access methods to be const
5086  mutable CoordT mKey; // 3*4 = 12 bytes
5087  mutable const RootT* mRoot; // 8 bytes
5088  mutable const NodeT* mNode; // 8 bytes
5089 
5090 public:
5091  using BuildType = BuildT;
5092  using ValueType = ValueT;
5093  using CoordType = CoordT;
5094 
5095  static const int CacheLevels = 1;
5096 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
5097  using NodeInfo = typename ReadAccessor<ValueT, -1, -1, -1>::NodeInfo;
5098 #endif
5099  /// @brief Constructor from a root node
5100  __hostdev__ ReadAccessor(const RootT& root)
5101  : mKey(CoordType::max())
5102  , mRoot(&root)
5103  , mNode(nullptr)
5104  {
5105  }
5106 
5107  /// @brief Constructor from a grid
5108  __hostdev__ ReadAccessor(const GridT& grid)
5109  : ReadAccessor(grid.tree().root())
5110  {
5111  }
5112 
5113  /// @brief Constructor from a tree
5114  __hostdev__ ReadAccessor(const TreeT& tree)
5115  : ReadAccessor(tree.root())
5116  {
5117  }
5118 
5119  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5120  __hostdev__ void clear()
5121  {
5122  mKey = CoordType::max();
5123  mNode = nullptr;
5124  }
5125 
5126  __hostdev__ const RootT& root() const { return *mRoot; }
5127 
5128  /// @brief Default constructors
5129  ReadAccessor(const ReadAccessor&) = default;
5130  ~ReadAccessor() = default;
5131  ReadAccessor& operator=(const ReadAccessor&) = default;
5132 
5133  __hostdev__ bool isCached(const CoordType& ijk) const
5134  {
5135  return (ijk[0] & int32_t(~NodeT::MASK)) == mKey[0] &&
5136  (ijk[1] & int32_t(~NodeT::MASK)) == mKey[1] &&
5137  (ijk[2] & int32_t(~NodeT::MASK)) == mKey[2];
5138  }
5139 
5140 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
5141  __hostdev__ ValueType getValue(const CoordType& ijk) const
5142  {
5143  return this->template get<GetValue<BuildT>>(ijk);
5144  }
5145  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5146  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5147  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5148  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5149  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5150  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5151  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5152 #else // NANOVDB_NEW_ACCESSOR_METHODS
5153  __hostdev__ ValueType getValue(const CoordType& ijk) const
5154  {
5155  if (this->isCached(ijk))
5156  return mNode->getValueAndCache(ijk, *this);
5157  return mRoot->getValueAndCache(ijk, *this);
5158  }
5159  __hostdev__ ValueType getValue(int i, int j, int k) const
5160  {
5161  return this->getValue(CoordType(i, j, k));
5162  }
5163  __hostdev__ ValueType operator()(const CoordType& ijk) const
5164  {
5165  return this->getValue(ijk);
5166  }
5167  __hostdev__ ValueType operator()(int i, int j, int k) const
5168  {
5169  return this->getValue(CoordType(i, j, k));
5170  }
5171 
5172  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
5173  {
5174  if (this->isCached(ijk))
5175  return mNode->getNodeInfoAndCache(ijk, *this);
5176  return mRoot->getNodeInfoAndCache(ijk, *this);
5177  }
5178 
5179  __hostdev__ bool isActive(const CoordType& ijk) const
5180  {
5181  if (this->isCached(ijk))
5182  return mNode->isActiveAndCache(ijk, *this);
5183  return mRoot->isActiveAndCache(ijk, *this);
5184  }
5185 
5186  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
5187  {
5188  if (this->isCached(ijk))
5189  return mNode->probeValueAndCache(ijk, v, *this);
5190  return mRoot->probeValueAndCache(ijk, v, *this);
5191  }
5192 
5193  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
5194  {
5195  if (this->isCached(ijk))
5196  return mNode->probeLeafAndCache(ijk, *this);
5197  return mRoot->probeLeafAndCache(ijk, *this);
5198  }
5199 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5200  template<typename RayT>
5201  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5202  {
5203  if (this->isCached(ijk))
5204  return mNode->getDimAndCache(ijk, ray, *this);
5205  return mRoot->getDimAndCache(ijk, ray, *this);
5206  }
5207 
5208  template<typename OpT, typename... ArgsT>
5209  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
5210  {
5211  if (this->isCached(ijk))
5212  return mNode->template getAndCache<OpT>(ijk, *this, args...);
5213  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5214  }
5215 
5216  template<typename OpT, typename... ArgsT>
5217  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
5218  {
5219  if (this->isCached(ijk))
5220  return const_cast<NodeT*>(mNode)->template setAndCache<OpT>(ijk, *this, args...);
5221  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5222  }
5223 
5224 private:
5225  /// @brief Allow nodes to insert themselves into the cache.
5226  template<typename>
5227  friend class RootNode;
5228  template<typename, uint32_t>
5229  friend class InternalNode;
5230  template<typename, typename, template<uint32_t> class, uint32_t>
5231  friend class LeafNode;
5232 
5233  /// @brief Inserts a node and its key into this ReadAccessor
5234  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
5235  {
5236  mKey = ijk & ~NodeT::MASK;
5237  mNode = node;
5238  }
5239 
5240  // no-op
5241  template<typename OtherNodeT>
5242  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5243 
5244 }; // ReadAccessor<ValueT, LEVEL0>
5245 
5246 template<typename BuildT, int LEVEL0, int LEVEL1>
5247 class ReadAccessor<BuildT, LEVEL0, LEVEL1, -1> //e.g. (0,1), (1,2), (0,2)
5248 {
5249  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 must be 0, 1, 2");
5250  static_assert(LEVEL1 >= 0 && LEVEL1 <= 2, "LEVEL1 must be 0, 1, 2");
5251  static_assert(LEVEL0 < LEVEL1, "Level 0 must be lower than level 1");
5252  using GridT = NanoGrid<BuildT>; // grid
5253  using TreeT = NanoTree<BuildT>;
5254  using RootT = NanoRoot<BuildT>;
5255  using LeafT = NanoLeaf<BuildT>;
5256  using Node1T = typename NodeTrait<TreeT, LEVEL0>::type;
5257  using Node2T = typename NodeTrait<TreeT, LEVEL1>::type;
5258  using CoordT = typename RootT::CoordType;
5259  using ValueT = typename RootT::ValueType;
5260  using FloatType = typename RootT::FloatType;
5261  using CoordValueType = typename RootT::CoordT::ValueType;
5262 
5263  // All member data are mutable to allow for access methods to be const
5264 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5265  mutable CoordT mKey; // 3*4 = 12 bytes
5266 #else // 68 bytes total
5267  mutable CoordT mKeys[2]; // 2*3*4 = 24 bytes
5268 #endif
5269  mutable const RootT* mRoot;
5270  mutable const Node1T* mNode1;
5271  mutable const Node2T* mNode2;
5272 
5273 public:
5274  using BuildType = BuildT;
5275  using ValueType = ValueT;
5276  using CoordType = CoordT;
5277 
5278  static const int CacheLevels = 2;
5279 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
5280  using NodeInfo = typename ReadAccessor<ValueT, -1, -1, -1>::NodeInfo;
5281 #endif
5282  /// @brief Constructor from a root node
5283  __hostdev__ ReadAccessor(const RootT& root)
5284 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5285  : mKey(CoordType::max())
5286 #else
5287  : mKeys{CoordType::max(), CoordType::max()}
5288 #endif
5289  , mRoot(&root)
5290  , mNode1(nullptr)
5291  , mNode2(nullptr)
5292  {
5293  }
5294 
5295  /// @brief Constructor from a grid
5296  __hostdev__ ReadAccessor(const GridT& grid)
5297  : ReadAccessor(grid.tree().root())
5298  {
5299  }
5300 
5301  /// @brief Constructor from a tree
5302  __hostdev__ ReadAccessor(const TreeT& tree)
5303  : ReadAccessor(tree.root())
5304  {
5305  }
5306 
5307  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5308  __hostdev__ void clear()
5309  {
5310 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5311  mKey = CoordType::max();
5312 #else
5313  mKeys[0] = mKeys[1] = CoordType::max();
5314 #endif
5315  mNode1 = nullptr;
5316  mNode2 = nullptr;
5317  }
5318 
5319  __hostdev__ const RootT& root() const { return *mRoot; }
5320 
5321  /// @brief Default constructors
5322  ReadAccessor(const ReadAccessor&) = default;
5323  ~ReadAccessor() = default;
5324  ReadAccessor& operator=(const ReadAccessor&) = default;
5325 
5326 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5327  __hostdev__ bool isCached1(CoordValueType dirty) const
5328  {
5329  if (!mNode1)
5330  return false;
5331  if (dirty & int32_t(~Node1T::MASK)) {
5332  mNode1 = nullptr;
5333  return false;
5334  }
5335  return true;
5336  }
5337  __hostdev__ bool isCached2(CoordValueType dirty) const
5338  {
5339  if (!mNode2)
5340  return false;
5341  if (dirty & int32_t(~Node2T::MASK)) {
5342  mNode2 = nullptr;
5343  return false;
5344  }
5345  return true;
5346  }
5347  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5348  {
5349  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5350  }
5351 #else
5352  __hostdev__ bool isCached1(const CoordType& ijk) const
5353  {
5354  return (ijk[0] & int32_t(~Node1T::MASK)) == mKeys[0][0] &&
5355  (ijk[1] & int32_t(~Node1T::MASK)) == mKeys[0][1] &&
5356  (ijk[2] & int32_t(~Node1T::MASK)) == mKeys[0][2];
5357  }
5358  __hostdev__ bool isCached2(const CoordType& ijk) const
5359  {
5360  return (ijk[0] & int32_t(~Node2T::MASK)) == mKeys[1][0] &&
5361  (ijk[1] & int32_t(~Node2T::MASK)) == mKeys[1][1] &&
5362  (ijk[2] & int32_t(~Node2T::MASK)) == mKeys[1][2];
5363  }
5364 #endif
5365 
5366 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
5367  __hostdev__ ValueType getValue(const CoordType& ijk) const
5368  {
5369  return this->template get<GetValue<BuildT>>(ijk);
5370  }
5371  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5372  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5373  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5374  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5375  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5376  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5377  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5378 #else // NANOVDB_NEW_ACCESSOR_METHODS
5379 
5380  __hostdev__ ValueType getValue(const CoordType& ijk) const
5381  {
5382 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5383  const CoordValueType dirty = this->computeDirty(ijk);
5384 #else
5385  auto&& dirty = ijk;
5386 #endif
5387  if (this->isCached1(dirty)) {
5388  return mNode1->getValueAndCache(ijk, *this);
5389  } else if (this->isCached2(dirty)) {
5390  return mNode2->getValueAndCache(ijk, *this);
5391  }
5392  return mRoot->getValueAndCache(ijk, *this);
5393  }
5394  __hostdev__ ValueType operator()(const CoordType& ijk) const
5395  {
5396  return this->getValue(ijk);
5397  }
5398  __hostdev__ ValueType operator()(int i, int j, int k) const
5399  {
5400  return this->getValue(CoordType(i, j, k));
5401  }
5402  __hostdev__ ValueType getValue(int i, int j, int k) const
5403  {
5404  return this->getValue(CoordType(i, j, k));
5405  }
5406  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
5407  {
5408 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5409  const CoordValueType dirty = this->computeDirty(ijk);
5410 #else
5411  auto&& dirty = ijk;
5412 #endif
5413  if (this->isCached1(dirty)) {
5414  return mNode1->getNodeInfoAndCache(ijk, *this);
5415  } else if (this->isCached2(dirty)) {
5416  return mNode2->getNodeInfoAndCache(ijk, *this);
5417  }
5418  return mRoot->getNodeInfoAndCache(ijk, *this);
5419  }
5420 
5421  __hostdev__ bool isActive(const CoordType& ijk) const
5422  {
5423 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5424  const CoordValueType dirty = this->computeDirty(ijk);
5425 #else
5426  auto&& dirty = ijk;
5427 #endif
5428  if (this->isCached1(dirty)) {
5429  return mNode1->isActiveAndCache(ijk, *this);
5430  } else if (this->isCached2(dirty)) {
5431  return mNode2->isActiveAndCache(ijk, *this);
5432  }
5433  return mRoot->isActiveAndCache(ijk, *this);
5434  }
5435 
5436  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
5437  {
5438 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5439  const CoordValueType dirty = this->computeDirty(ijk);
5440 #else
5441  auto&& dirty = ijk;
5442 #endif
5443  if (this->isCached1(dirty)) {
5444  return mNode1->probeValueAndCache(ijk, v, *this);
5445  } else if (this->isCached2(dirty)) {
5446  return mNode2->probeValueAndCache(ijk, v, *this);
5447  }
5448  return mRoot->probeValueAndCache(ijk, v, *this);
5449  }
5450 
5451  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
5452  {
5453 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5454  const CoordValueType dirty = this->computeDirty(ijk);
5455 #else
5456  auto&& dirty = ijk;
5457 #endif
5458  if (this->isCached1(dirty)) {
5459  return mNode1->probeLeafAndCache(ijk, *this);
5460  } else if (this->isCached2(dirty)) {
5461  return mNode2->probeLeafAndCache(ijk, *this);
5462  }
5463  return mRoot->probeLeafAndCache(ijk, *this);
5464  }
5465 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5466 
5467  template<typename RayT>
5468  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5469  {
5470 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5471  const CoordValueType dirty = this->computeDirty(ijk);
5472 #else
5473  auto&& dirty = ijk;
5474 #endif
5475  if (this->isCached1(dirty)) {
5476  return mNode1->getDimAndCache(ijk, ray, *this);
5477  } else if (this->isCached2(dirty)) {
5478  return mNode2->getDimAndCache(ijk, ray, *this);
5479  }
5480  return mRoot->getDimAndCache(ijk, ray, *this);
5481  }
5482 
5483  template<typename OpT, typename... ArgsT>
5484  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
5485  {
5486 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5487  const CoordValueType dirty = this->computeDirty(ijk);
5488 #else
5489  auto&& dirty = ijk;
5490 #endif
5491  if (this->isCached1(dirty)) {
5492  return mNode1->template getAndCache<OpT>(ijk, *this, args...);
5493  } else if (this->isCached2(dirty)) {
5494  return mNode2->template getAndCache<OpT>(ijk, *this, args...);
5495  }
5496  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5497  }
5498 
5499  template<typename OpT, typename... ArgsT>
5500  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
5501  {
5502 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5503  const CoordValueType dirty = this->computeDirty(ijk);
5504 #else
5505  auto&& dirty = ijk;
5506 #endif
5507  if (this->isCached1(dirty)) {
5508  return const_cast<Node1T*>(mNode1)->template setAndCache<OpT>(ijk, *this, args...);
5509  } else if (this->isCached2(dirty)) {
5510  return const_cast<Node2T*>(mNode2)->template setAndCache<OpT>(ijk, *this, args...);
5511  }
5512  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5513  }
5514 
5515 private:
5516  /// @brief Allow nodes to insert themselves into the cache.
5517  template<typename>
5518  friend class RootNode;
5519  template<typename, uint32_t>
5520  friend class InternalNode;
5521  template<typename, typename, template<uint32_t> class, uint32_t>
5522  friend class LeafNode;
5523 
5524  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5525  __hostdev__ void insert(const CoordType& ijk, const Node1T* node) const
5526  {
5527 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5528  mKey = ijk;
5529 #else
5530  mKeys[0] = ijk & ~Node1T::MASK;
5531 #endif
5532  mNode1 = node;
5533  }
5534  __hostdev__ void insert(const CoordType& ijk, const Node2T* node) const
5535  {
5536 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5537  mKey = ijk;
5538 #else
5539  mKeys[1] = ijk & ~Node2T::MASK;
5540 #endif
5541  mNode2 = node;
5542  }
5543  template<typename OtherNodeT>
5544  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5545 }; // ReadAccessor<BuildT, LEVEL0, LEVEL1>
5546 
5547 /// @brief Node caching at all (three) tree levels
5548 template<typename BuildT>
5549 class ReadAccessor<BuildT, 0, 1, 2>
5550 {
5551  using GridT = NanoGrid<BuildT>; // grid
5552  using TreeT = NanoTree<BuildT>;
5553  using RootT = NanoRoot<BuildT>; // root node
5554  using NodeT2 = NanoUpper<BuildT>; // upper internal node
5555  using NodeT1 = NanoLower<BuildT>; // lower internal node
5556  using LeafT = NanoLeaf<BuildT>; // Leaf node
5557  using CoordT = typename RootT::CoordType;
5558  using ValueT = typename RootT::ValueType;
5559 
5560  using FloatType = typename RootT::FloatType;
5561  using CoordValueType = typename RootT::CoordT::ValueType;
5562 
5563  // All member data are mutable to allow for access methods to be const
5564 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5565  mutable CoordT mKey; // 3*4 = 12 bytes
5566 #else // 68 bytes total
5567  mutable CoordT mKeys[3]; // 3*3*4 = 36 bytes
5568 #endif
5569  mutable const RootT* mRoot; // 8 bytes
5570  mutable const void* mNode[3]; // 3*8 = 24 bytes
5571 
5572 public:
5573  using BuildType = BuildT;
5574  using ValueType = ValueT;
5575  using CoordType = CoordT;
5576 
5577  static const int CacheLevels = 3;
5578 #ifndef NANOVDB_NEW_ACCESSOR_METHODS
5579  using NodeInfo = typename ReadAccessor<ValueT, -1, -1, -1>::NodeInfo;
5580 #endif
5581  /// @brief Constructor from a root node
5582  __hostdev__ ReadAccessor(const RootT& root)
5583 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5584  : mKey(CoordType::max())
5585 #else
5586  : mKeys{CoordType::max(), CoordType::max(), CoordType::max()}
5587 #endif
5588  , mRoot(&root)
5589  , mNode{nullptr, nullptr, nullptr}
5590  {
5591  }
5592 
5593  /// @brief Constructor from a grid
5594  __hostdev__ ReadAccessor(const GridT& grid)
5595  : ReadAccessor(grid.tree().root())
5596  {
5597  }
5598 
5599  /// @brief Constructor from a tree
5600  __hostdev__ ReadAccessor(const TreeT& tree)
5601  : ReadAccessor(tree.root())
5602  {
5603  }
5604 
5605  __hostdev__ const RootT& root() const { return *mRoot; }
5606 
5607  /// @brief Default constructors
5608  ReadAccessor(const ReadAccessor&) = default;
5609  ~ReadAccessor() = default;
5610  ReadAccessor& operator=(const ReadAccessor&) = default;
5611 
5612  /// @brief Return a const pointer to the cached node of the specified type
5613  ///
5614  /// @warning The return value could be NULL.
5615  template<typename NodeT>
5616  __hostdev__ const NodeT* getNode() const
5617  {
5618  using T = typename NodeTrait<TreeT, NodeT::LEVEL>::type;
5619  static_assert(util::is_same<T, NodeT>::value, "ReadAccessor::getNode: Invalid node type");
5620  return reinterpret_cast<const T*>(mNode[NodeT::LEVEL]);
5621  }
5622 
5623  template<int LEVEL>
5624  __hostdev__ const typename NodeTrait<TreeT, LEVEL>::type* getNode() const
5625  {
5626  using T = typename NodeTrait<TreeT, LEVEL>::type;
5627  static_assert(LEVEL >= 0 && LEVEL <= 2, "ReadAccessor::getNode: Invalid node type");
5628  return reinterpret_cast<const T*>(mNode[LEVEL]);
5629  }
5630 
5631  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5632  __hostdev__ void clear()
5633  {
5634 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5635  mKey = CoordType::max();
5636 #else
5637  mKeys[0] = mKeys[1] = mKeys[2] = CoordType::max();
5638 #endif
5639  mNode[0] = mNode[1] = mNode[2] = nullptr;
5640  }
5641 
5642 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5643  template<typename NodeT>
5644  __hostdev__ bool isCached(CoordValueType dirty) const
5645  {
5646  if (!mNode[NodeT::LEVEL])
5647  return false;
5648  if (dirty & int32_t(~NodeT::MASK)) {
5649  mNode[NodeT::LEVEL] = nullptr;
5650  return false;
5651  }
5652  return true;
5653  }
5654 
5655  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5656  {
5657  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5658  }
5659 #else
5660  template<typename NodeT>
5661  __hostdev__ bool isCached(const CoordType& ijk) const
5662  {
5663  return (ijk[0] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][0] &&
5664  (ijk[1] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][1] &&
5665  (ijk[2] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][2];
5666  }
5667 #endif
5668 
5669 #ifdef NANOVDB_NEW_ACCESSOR_METHODS
5670  __hostdev__ ValueType getValue(const CoordType& ijk) const
5671  {
5672  return this->template get<GetValue<BuildT>>(ijk);
5673  }
5674  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5675  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5676  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5677  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5678  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5679  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5680  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5681 #else // NANOVDB_NEW_ACCESSOR_METHODS
5682 
5683  __hostdev__ ValueType getValue(const CoordType& ijk) const
5684  {
5685 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5686  const CoordValueType dirty = this->computeDirty(ijk);
5687 #else
5688  auto&& dirty = ijk;
5689 #endif
5690  if (this->isCached<LeafT>(dirty)) {
5691  return ((LeafT*)mNode[0])->getValue(ijk);
5692  } else if (this->isCached<NodeT1>(dirty)) {
5693  return ((NodeT1*)mNode[1])->getValueAndCache(ijk, *this);
5694  } else if (this->isCached<NodeT2>(dirty)) {
5695  return ((NodeT2*)mNode[2])->getValueAndCache(ijk, *this);
5696  }
5697  return mRoot->getValueAndCache(ijk, *this);
5698  }
5699  __hostdev__ ValueType operator()(const CoordType& ijk) const
5700  {
5701  return this->getValue(ijk);
5702  }
5703  __hostdev__ ValueType operator()(int i, int j, int k) const
5704  {
5705  return this->getValue(CoordType(i, j, k));
5706  }
5707  __hostdev__ ValueType getValue(int i, int j, int k) const
5708  {
5709  return this->getValue(CoordType(i, j, k));
5710  }
5711 
5712  __hostdev__ NodeInfo getNodeInfo(const CoordType& ijk) const
5713  {
5714 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5715  const CoordValueType dirty = this->computeDirty(ijk);
5716 #else
5717  auto&& dirty = ijk;
5718 #endif
5719  if (this->isCached<LeafT>(dirty)) {
5720  return ((LeafT*)mNode[0])->getNodeInfoAndCache(ijk, *this);
5721  } else if (this->isCached<NodeT1>(dirty)) {
5722  return ((NodeT1*)mNode[1])->getNodeInfoAndCache(ijk, *this);
5723  } else if (this->isCached<NodeT2>(dirty)) {
5724  return ((NodeT2*)mNode[2])->getNodeInfoAndCache(ijk, *this);
5725  }
5726  return mRoot->getNodeInfoAndCache(ijk, *this);
5727  }
5728 
5729  __hostdev__ bool isActive(const CoordType& ijk) const
5730  {
5731 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5732  const CoordValueType dirty = this->computeDirty(ijk);
5733 #else
5734  auto&& dirty = ijk;
5735 #endif
5736  if (this->isCached<LeafT>(dirty)) {
5737  return ((LeafT*)mNode[0])->isActive(ijk);
5738  } else if (this->isCached<NodeT1>(dirty)) {
5739  return ((NodeT1*)mNode[1])->isActiveAndCache(ijk, *this);
5740  } else if (this->isCached<NodeT2>(dirty)) {
5741  return ((NodeT2*)mNode[2])->isActiveAndCache(ijk, *this);
5742  }
5743  return mRoot->isActiveAndCache(ijk, *this);
5744  }
5745 
5746  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const
5747  {
5748 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5749  const CoordValueType dirty = this->computeDirty(ijk);
5750 #else
5751  auto&& dirty = ijk;
5752 #endif
5753  if (this->isCached<LeafT>(dirty)) {
5754  return ((LeafT*)mNode[0])->probeValue(ijk, v);
5755  } else if (this->isCached<NodeT1>(dirty)) {
5756  return ((NodeT1*)mNode[1])->probeValueAndCache(ijk, v, *this);
5757  } else if (this->isCached<NodeT2>(dirty)) {
5758  return ((NodeT2*)mNode[2])->probeValueAndCache(ijk, v, *this);
5759  }
5760  return mRoot->probeValueAndCache(ijk, v, *this);
5761  }
5762  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const
5763  {
5764 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5765  const CoordValueType dirty = this->computeDirty(ijk);
5766 #else
5767  auto&& dirty = ijk;
5768 #endif
5769  if (this->isCached<LeafT>(dirty)) {
5770  return ((LeafT*)mNode[0]);
5771  } else if (this->isCached<NodeT1>(dirty)) {
5772  return ((NodeT1*)mNode[1])->probeLeafAndCache(ijk, *this);
5773  } else if (this->isCached<NodeT2>(dirty)) {
5774  return ((NodeT2*)mNode[2])->probeLeafAndCache(ijk, *this);
5775  }
5776  return mRoot->probeLeafAndCache(ijk, *this);
5777  }
5778 #endif // NANOVDB_NEW_ACCESSOR_METHODS
5779 
5780  template<typename OpT, typename... ArgsT>
5781  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
5782  {
5783 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5784  const CoordValueType dirty = this->computeDirty(ijk);
5785 #else
5786  auto&& dirty = ijk;
5787 #endif
5788  if (this->isCached<LeafT>(dirty)) {
5789  return ((const LeafT*)mNode[0])->template getAndCache<OpT>(ijk, *this, args...);
5790  } else if (this->isCached<NodeT1>(dirty)) {
5791  return ((const NodeT1*)mNode[1])->template getAndCache<OpT>(ijk, *this, args...);
5792  } else if (this->isCached<NodeT2>(dirty)) {
5793  return ((const NodeT2*)mNode[2])->template getAndCache<OpT>(ijk, *this, args...);
5794  }
5795  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5796  }
5797 
5798  template<typename OpT, typename... ArgsT>
5799  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
5800  {
5801 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5802  const CoordValueType dirty = this->computeDirty(ijk);
5803 #else
5804  auto&& dirty = ijk;
5805 #endif
5806  if (this->isCached<LeafT>(dirty)) {
5807  return ((LeafT*)mNode[0])->template setAndCache<OpT>(ijk, *this, args...);
5808  } else if (this->isCached<NodeT1>(dirty)) {
5809  return ((NodeT1*)mNode[1])->template setAndCache<OpT>(ijk, *this, args...);
5810  } else if (this->isCached<NodeT2>(dirty)) {
5811  return ((NodeT2*)mNode[2])->template setAndCache<OpT>(ijk, *this, args...);
5812  }
5813  return ((RootT*)mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5814  }
5815 
5816  template<typename RayT>
5817  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5818  {
5819 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5820  const CoordValueType dirty = this->computeDirty(ijk);
5821 #else
5822  auto&& dirty = ijk;
5823 #endif
5824  if (this->isCached<LeafT>(dirty)) {
5825  return ((LeafT*)mNode[0])->getDimAndCache(ijk, ray, *this);
5826  } else if (this->isCached<NodeT1>(dirty)) {
5827  return ((NodeT1*)mNode[1])->getDimAndCache(ijk, ray, *this);
5828  } else if (this->isCached<NodeT2>(dirty)) {
5829  return ((NodeT2*)mNode[2])->getDimAndCache(ijk, ray, *this);
5830  }
5831  return mRoot->getDimAndCache(ijk, ray, *this);
5832  }
5833 
5834 private:
5835  /// @brief Allow nodes to insert themselves into the cache.
5836  template<typename>
5837  friend class RootNode;
5838  template<typename, uint32_t>
5839  friend class InternalNode;
5840  template<typename, typename, template<uint32_t> class, uint32_t>
5841  friend class LeafNode;
5842 
5843  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5844  template<typename NodeT>
5845  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
5846  {
5847 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5848  mKey = ijk;
5849 #else
5850  mKeys[NodeT::LEVEL] = ijk & ~NodeT::MASK;
5851 #endif
5852  mNode[NodeT::LEVEL] = node;
5853  }
5854 }; // ReadAccessor<BuildT, 0, 1, 2>
5855 
5856 //////////////////////////////////////////////////
5857 
5858 /// @brief Free-standing function for convenient creation of a ReadAccessor with
5859 /// optional and customizable node caching.
5860 ///
5861 /// @details createAccessor<>(grid): No caching of nodes and hence it's thread-safe but slow
5862 /// createAccessor<0>(grid): Caching of leaf nodes only
5863 /// createAccessor<1>(grid): Caching of lower internal nodes only
5864 /// createAccessor<2>(grid): Caching of upper internal nodes only
5865 /// createAccessor<0,1>(grid): Caching of leaf and lower internal nodes
5866 /// createAccessor<0,2>(grid): Caching of leaf and upper internal nodes
5867 /// createAccessor<1,2>(grid): Caching of lower and upper internal nodes
5868 /// createAccessor<0,1,2>(grid): Caching of all nodes at all tree levels
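///
/// @code
/// // Minimal usage sketch, assuming "grid" is a valid NanoGrid<float> residing in host memory:
/// auto acc = createAccessor<0, 1, 2>(grid); // cache leaf, lower and upper internal nodes
/// const float v = acc.getValue(Coord(1, 2, 3));
/// @endcode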
5869 
5870 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5871 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoGrid<ValueT>& grid)
5872 {
5873  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(grid);
5874 }
5875 
5876 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5877 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoTree<ValueT>& tree)
5878 {
5879  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(tree);
5880 }
5881 
5882 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5883 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoRoot<ValueT>& root)
5884 {
5885  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(root);
5886 }
5887 
5888 //////////////////////////////////////////////////
5889 
5890 /// @brief This is a convenient class that allows for access to grid meta-data
5891 /// that are independent of the value type of a grid. That is, this class
5892 /// can be used to get information about a grid without actually knowing
5893 /// its ValueType.
5894 class GridMetaData
5895 { // 768 bytes (32 byte aligned)
5896  GridData mGridData; // 672B
5897  TreeData mTreeData; // 64B
5898  CoordBBox mIndexBBox; // 24B. AABB of active values in index space.
5899  uint32_t mRootTableSize, mPadding{0}; // 8B
5900 
5901 public:
5902  template<typename T>
5903  GridMetaData(const NanoGrid<T>& grid)
5904  {
5905  mGridData = *grid.data();
5906  mTreeData = *grid.tree().data();
5907  mIndexBBox = grid.indexBBox();
5908  mRootTableSize = grid.tree().root().getTableSize();
5909  }
5910  GridMetaData(const GridData* gridData)
5911  {
5912  if (GridMetaData::safeCast(gridData)) {
5913  *this = *reinterpret_cast<const GridMetaData*>(gridData);
5914  //util::memcpy(this, (const GridMetaData*)gridData);
5915  } else {// otherwise copy each member individually
5916  mGridData = *gridData;
5917  mTreeData = *reinterpret_cast<const TreeData*>(gridData->treePtr());
5918  mIndexBBox = gridData->indexBBox();
5919  mRootTableSize = gridData->rootTableSize();
5920  }
5921  }
5922  GridMetaData& operator=(const GridMetaData&) = default;
5923  /// @brief return true if the RootData follows right after the TreeData.
5924  /// If so, this implies that it's safe to cast the grid from which
5925  /// this instance was constructed to a GridMetaData
5926  __hostdev__ bool safeCast() const { return mTreeData.isRootNext(); }
5927 
5928  /// @brief return true if it is safe to cast the grid to a pointer
5929  /// of type GridMetaData, i.e. construction can be avoided.
5930  __hostdev__ static bool safeCast(const GridData *gridData){
5931  NANOVDB_ASSERT(gridData && gridData->isValid());
5932  return gridData->isRootConnected();
5933  }
5934  /// @brief return true if it is safe to cast the grid to a pointer
5935  /// of type GridMetaData, i.e. construction can be avoided.
5936  template<typename T>
5937  __hostdev__ static bool safeCast(const NanoGrid<T>& grid){return grid.tree().isRootNext();}
5938  __hostdev__ bool isValid() const { return mGridData.isValid(); }
5939  __hostdev__ const GridType& gridType() const { return mGridData.mGridType; }
5940  __hostdev__ const GridClass& gridClass() const { return mGridData.mGridClass; }
5941  __hostdev__ bool isLevelSet() const { return mGridData.mGridClass == GridClass::LevelSet; }
5942  __hostdev__ bool isFogVolume() const { return mGridData.mGridClass == GridClass::FogVolume; }
5943  __hostdev__ bool isStaggered() const { return mGridData.mGridClass == GridClass::Staggered; }
5944  __hostdev__ bool isPointIndex() const { return mGridData.mGridClass == GridClass::PointIndex; }
5945  __hostdev__ bool isGridIndex() const { return mGridData.mGridClass == GridClass::IndexGrid; }
5946  __hostdev__ bool isPointData() const { return mGridData.mGridClass == GridClass::PointData; }
5947  __hostdev__ bool isMask() const { return mGridData.mGridClass == GridClass::Topology; }
5948  __hostdev__ bool isUnknown() const { return mGridData.mGridClass == GridClass::Unknown; }
5949  __hostdev__ bool hasMinMax() const { return mGridData.mFlags.isMaskOn(GridFlags::HasMinMax); }
5950  __hostdev__ bool hasBBox() const { return mGridData.mFlags.isMaskOn(GridFlags::HasBBox); }
5951  __hostdev__ bool hasLongGridName() const { return mGridData.mFlags.isMaskOn(GridFlags::HasLongGridName); }
5952  __hostdev__ bool hasAverage() const { return mGridData.mFlags.isMaskOn(GridFlags::HasAverage); }
5953  __hostdev__ bool hasStdDeviation() const { return mGridData.mFlags.isMaskOn(GridFlags::HasStdDeviation); }
5954  __hostdev__ bool isBreadthFirst() const { return mGridData.mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
5955  __hostdev__ uint64_t gridSize() const { return mGridData.mGridSize; }
5956  __hostdev__ uint32_t gridIndex() const { return mGridData.mGridIndex; }
5957  __hostdev__ uint32_t gridCount() const { return mGridData.mGridCount; }
5958  __hostdev__ const char* shortGridName() const { return mGridData.mGridName; }
5959  __hostdev__ const Map& map() const { return mGridData.mMap; }
5960  __hostdev__ const Vec3dBBox& worldBBox() const { return mGridData.mWorldBBox; }
5961  __hostdev__ const CoordBBox& indexBBox() const { return mIndexBBox; }
5962  __hostdev__ Vec3d voxelSize() const { return mGridData.mVoxelSize; }
5963  __hostdev__ int blindDataCount() const { return mGridData.mBlindMetadataCount; }
5964  __hostdev__ uint64_t activeVoxelCount() const { return mTreeData.mVoxelCount; }
5965  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const { return mTreeData.mTileCount[level - 1]; }
5966  __hostdev__ uint32_t nodeCount(uint32_t level) const { return mTreeData.mNodeCount[level]; }
5967  __hostdev__ const Checksum& checksum() const { return mGridData.mChecksum; }
5968  __hostdev__ uint32_t rootTableSize() const { return mRootTableSize; }
5969  __hostdev__ bool isEmpty() const { return mRootTableSize == 0; }
5970  __hostdev__ Version version() const { return mGridData.mVersion; }
5971 }; // GridMetaData
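
// Minimal usage sketch of the GridMetaData class above, assuming "gridData" points to a valid,
// 32-byte-aligned NanoVDB grid buffer of unknown value type:
//
//   GridMetaData meta(gridData);
//   if (meta.isLevelSet()) {
//       const uint64_t voxels = meta.activeVoxelCount();
//       const Vec3d    dx     = meta.voxelSize();
//   }
//
// If GridMetaData::safeCast(gridData) returns true, the buffer can instead be reinterpreted in
// place as a GridMetaData, avoiding the member-wise copy performed by the constructor.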
5972 
5973 /// @brief Class to access points at a specific voxel location
5974 ///
5975 /// @note If the GridClass is PointIndex, AttT should be uint32_t; if it is PointData, AttT should be Vec3f
5976 template<typename AttT, typename BuildT = uint32_t>
5977 class PointAccessor : public DefaultReadAccessor<BuildT>
5978 {
5979  using AccT = DefaultReadAccessor<BuildT>;
5980  const NanoGrid<BuildT>& mGrid;
5981  const AttT* mData;
5982 
5983 public:
5984  __hostdev__ PointAccessor(const NanoGrid<BuildT>& grid)
5985  : AccT(grid.tree().root())
5986  , mGrid(grid)
5987  , mData(grid.template getBlindData<AttT>(0))
5988  {
5989  NANOVDB_ASSERT(grid.gridType() == toGridType<BuildT>());
5992  }
5993 
5994  /// @brief return true if this accessor was initialized correctly
5995  __hostdev__ operator bool() const { return mData != nullptr; }
5996 
5997  __hostdev__ const NanoGrid<BuildT>& grid() const { return mGrid; }
5998 
5999  /// @brief Return the total number of points in the grid and set the
6000  /// iterators to the complete range of points.
6001  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
6002  {
6003  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
6004  begin = mData;
6005  end = begin + count;
6006  return count;
6007  }
6008  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
6009  /// If this return value is larger than zero then the iterators @a begin and @a end
6010  /// will point to all the attributes contained within that leaf node.
6011  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
6012  {
6013  auto* leaf = this->probeLeaf(ijk);
6014  if (leaf == nullptr) {
6015  return 0;
6016  }
6017  begin = mData + leaf->minimum();
6018  end = begin + leaf->maximum();
6019  return leaf->maximum();
6020  }
6021 
6022  /// @brief get iterators over attributes to points at a specific voxel location
6023  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
6024  {
6025  begin = end = nullptr;
6026  if (auto* leaf = this->probeLeaf(ijk)) {
6027  const uint32_t offset = NanoLeaf<BuildT>::CoordToOffset(ijk);
6028  if (leaf->isActive(offset)) {
6029  begin = mData + leaf->minimum();
6030  end = begin + leaf->getValue(offset);
6031  if (offset > 0u)
6032  begin += leaf->getValue(offset - 1);
6033  }
6034  }
6035  return end - begin;
6036  }
6037 }; // PointAccessor
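
// Minimal usage sketch, assuming "grid" is a NanoGrid<uint32_t> of class GridClass::PointIndex
// whose blind data with index 0 stores uint32_t point indices, and i, j, k are voxel coordinates:
//
//   PointAccessor<uint32_t> acc(grid);
//   const uint32_t *begin = nullptr, *end = nullptr;
//   if (acc && acc.voxelPoints(Coord(i, j, k), begin, end)) {
//       for (const uint32_t* p = begin; p != end; ++p) { /* consume point index *p */ }
//   }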
6038 
6039 template<typename AttT>
6040 class PointAccessor<AttT, Point> : public DefaultReadAccessor<Point>
6041 {
6042  using AccT = DefaultReadAccessor<Point>;
6043  const NanoGrid<Point>& mGrid;
6044  const AttT* mData;
6045 
6046 public:
6047  __hostdev__ PointAccessor(const NanoGrid<Point>& grid)
6048  : AccT(grid.tree().root())
6049  , mGrid(grid)
6050  , mData(grid.template getBlindData<AttT>(0))
6051  {
6052  NANOVDB_ASSERT(mData);
6059  }
6060 
6061  /// @brief return true if this accessor was initialized correctly
6062  __hostdev__ operator bool() const { return mData != nullptr; }
6063 
6064  __hostdev__ const NanoGrid<Point>& grid() const { return mGrid; }
6065 
6066  /// @brief Return the total number of points in the grid and set the
6067  /// iterators to the complete range of points.
6068  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
6069  {
6070  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
6071  begin = mData;
6072  end = begin + count;
6073  return count;
6074  }
6075  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
6076  /// If this return value is larger than zero then the iterators @a begin and @a end
6077  /// will point to all the attributes contained within that leaf node.
6078  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
6079  {
6080  auto* leaf = this->probeLeaf(ijk);
6081  if (leaf == nullptr)
6082  return 0;
6083  begin = mData + leaf->offset();
6084  end = begin + leaf->pointCount();
6085  return leaf->pointCount();
6086  }
6087 
6088  /// @brief get iterators over attributes to points at a specific voxel location
6089  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
6090  {
6091  if (auto* leaf = this->probeLeaf(ijk)) {
6092  const uint32_t n = NanoLeaf<Point>::CoordToOffset(ijk);
6093  if (leaf->isActive(n)) {
6094  begin = mData + leaf->first(n);
6095  end = mData + leaf->last(n);
6096  return end - begin;
6097  }
6098  }
6099  begin = end = nullptr;
6100  return 0u; // no leaf or inactive voxel
6101  }
6102 }; // PointAccessor<AttT, Point>
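
// Equivalent sketch for the Point specialization above, assuming "grid" is a NanoGrid<Point>
// whose blind data with index 0 stores per-point Vec3f attributes:
//
//   PointAccessor<Vec3f, Point> acc(grid);
//   const Vec3f *begin = nullptr, *end = nullptr;
//   const uint64_t count = acc.voxelPoints(Coord(i, j, k), begin, end); // 0 if no leaf or inactive voxel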
6103 
6104 /// @brief Class to access values in channels at a specific voxel location.
6105 ///
6106 /// @note The ChannelT template parameter can be either const or non-const.
6107 template<typename ChannelT, typename IndexT = ValueIndex>
6108 class ChannelAccessor : public DefaultReadAccessor<IndexT>
6109 {
6110  static_assert(BuildTraits<IndexT>::is_index, "Expected an index build type");
6111  using BaseT = DefaultReadAccessor<IndexT>;
6112 
6113  const NanoGrid<IndexT>& mGrid;
6114  ChannelT* mChannel;
6115 
6116 public:
6117  using ValueType = ChannelT;
6118  using TreeType = NanoTree<IndexT>;
6120 
6121  /// @brief Ctor from an IndexGrid and an integer ID of an internal channel
6122  /// that is assumed to exist as blind data in the IndexGrid.
6123  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, uint32_t channelID = 0u)
6124  : BaseT(grid.tree().root())
6125  , mGrid(grid)
6126  , mChannel(nullptr)
6127  {
6128  NANOVDB_ASSERT(isIndex(grid.gridType()));
6130  this->setChannel(channelID);
6131  }
6132 
6133  /// @brief Ctor from an IndexGrid and an external channel
6134  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, ChannelT* channelPtr)
6135  : BaseT(grid.tree().root())
6136  , mGrid(grid)
6137  , mChannel(channelPtr)
6138  {
6139  NANOVDB_ASSERT(isIndex(grid.gridType()));
6141  }
6142 
6143  /// @brief return true if this accessor was initialized correctly
6144  __hostdev__ operator bool() const { return mChannel != nullptr; }
6145 
6146  /// @brief Return a const reference to the IndexGrid
6147  __hostdev__ const NanoGrid<IndexT>& grid() const { return mGrid; }
6148 
6149  /// @brief Return a const reference to the tree of the IndexGrid
6150  __hostdev__ const TreeType& tree() const { return mGrid.tree(); }
6151 
6152  /// @brief Return a vector of the axial voxel sizes
6153  __hostdev__ const Vec3d& voxelSize() const { return mGrid.voxelSize(); }
6154 
6155  /// @brief Return total number of values indexed by the IndexGrid
6156  __hostdev__ const uint64_t& valueCount() const { return mGrid.valueCount(); }
6157 
6158  /// @brief Change to an external channel
6159  /// @return Pointer to channel data
6160  __hostdev__ ChannelT* setChannel(ChannelT* channelPtr) {return mChannel = channelPtr;}
6161 
6162  /// @brief Change to an internal channel, assuming it exists as blind data
6163  /// in the IndexGrid.
6164  /// @return Pointer to channel data, which could be NULL if channelID is out of range or
6165  /// if ChannelT does not match the value type of the blind data
6166  __hostdev__ ChannelT* setChannel(uint32_t channelID)
6167  {
6168  return mChannel = const_cast<ChannelT*>(mGrid.template getBlindData<ChannelT>(channelID));
6169  }
6170 
6171  /// @brief Return the linear offset into a channel that maps to the specified coordinate
6172  __hostdev__ uint64_t getIndex(const math::Coord& ijk) const { return BaseT::getValue(ijk); }
6173  __hostdev__ uint64_t idx(int i, int j, int k) const { return BaseT::getValue(math::Coord(i, j, k)); }
6174 
6175  /// @brief Return the value from a cached channel that maps to the specified coordinate
6176  __hostdev__ ChannelT& getValue(const math::Coord& ijk) const { return mChannel[BaseT::getValue(ijk)]; }
6177  __hostdev__ ChannelT& operator()(const math::Coord& ijk) const { return this->getValue(ijk); }
6178  __hostdev__ ChannelT& operator()(int i, int j, int k) const { return this->getValue(math::Coord(i, j, k)); }
6179 
6180  /// @brief Return the state and update the value of the specified voxel
6181  __hostdev__ bool probeValue(const math::Coord& ijk, typename util::remove_const<ChannelT>::type& v) const
6182  {
6183  uint64_t idx;
6184  const bool isActive = BaseT::probeValue(ijk, idx);
6185  v = mChannel[idx];
6186  return isActive;
6187  }
6188  /// @brief Return the value from a specified channel that maps to the specified coordinate
6189  ///
6190  /// @note The template parameter can be either const or non-const
6191  template<typename T>
6192  __hostdev__ T& getValue(const math::Coord& ijk, T* channelPtr) const { return channelPtr[BaseT::getValue(ijk)]; }
6193 
6194 }; // ChannelAccessor
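
// Minimal usage sketch, assuming "indexGrid" is a NanoGrid<ValueIndex> whose blind data with
// index 0 holds a float channel:
//
//   ChannelAccessor<float> acc(indexGrid, 0u);
//   if (acc) {
//       float&         v = acc(Coord(1, 2, 3));          // channel value at (1,2,3)
//       const uint64_t n = acc.getIndex(Coord(1, 2, 3)); // linear offset into the channel
//   }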
6195 
6196 #if 0
6197 // This MiniGridHandle class is only included as a stand-alone example. Note that aligned_alloc is a C++17 feature!
6198 // Normally we recommend using GridHandle defined in util/GridHandle.h but this minimal implementation could be an
6199 // alternative when using the IO methods defined below.
6200 struct MiniGridHandle {
6201  struct BufferType {
6202  uint8_t *data;
6203  uint64_t size;
6204  BufferType(uint64_t n=0) : data((uint8_t*)std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, n)), size(n) {assert(isValid(data));}
6205  BufferType(BufferType &&other) : data(other.data), size(other.size) {other.data=nullptr; other.size=0;}
6206  ~BufferType() {std::free(data);}
6207  BufferType& operator=(const BufferType &other) = delete;
6208  BufferType& operator=(BufferType &&other){data=other.data; size=other.size; other.data=nullptr; other.size=0; return *this;}
6209  static BufferType create(size_t n, BufferType* dummy = nullptr) {return BufferType(n);}
6210  } buffer;
6211  MiniGridHandle(BufferType &&buf) : buffer(std::move(buf)) {}
6212  const uint8_t* data() const {return buffer.data;}
6213 };// MiniGridHandle
6214 #endif
6215 
6216 namespace io {
6217 
6218 /// @brief Define compression codecs
6219 ///
6220 /// @note NONE is the default, ZIP is slow but compact and BLOSC offers a great balance.
6221 ///
6222 /// @throw NanoVDB optionally supports ZIP and BLOSC compression and will throw an exception
6223 /// if its support is required but missing.
6224 enum class Codec : uint16_t { NONE = 0,
6225  ZIP = 1,
6226  BLOSC = 2,
6227  End = 3,
6228  StrLen = 6 + End };
6229 
6230 __hostdev__ inline const char* toStr(char *dst, Codec codec)
6231 {
6232  switch (codec){
6233  case Codec::NONE: return util::strcpy(dst, "NONE");
6234  case Codec::ZIP: return util::strcpy(dst, "ZIP");
6235  case Codec::BLOSC : return util::strcpy(dst, "BLOSC");
6236  default: return util::strcpy(dst, "END");
6237  }
6238 }
6239 
6240 __hostdev__ inline Codec toCodec(const char *str)
6241 {
6242  if (util::streq(str, "none")) return Codec::NONE;
6243  if (util::streq(str, "zip")) return Codec::ZIP;
6244  if (util::streq(str, "blosc")) return Codec::BLOSC;
6245  return Codec::End;
6246 }
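
// Sketch of the helpers above; note that toStr() produces upper-case names whereas toCodec()
// matches lower-case strings:
//
//   char buf[size_t(Codec::StrLen)];  // StrLen = 6 + End leaves room for any codec name plus '\0'
//   toStr(buf, Codec::BLOSC);         // buf now holds "BLOSC"
//   const Codec c = toCodec("blosc"); // c == Codec::BLOSC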
6247 
6248 /// @brief Data encoded at the head of each segment of a file or stream.
6249 ///
6250 /// @note A file or stream is composed of one or more segments that each contain
6251 /// one or more grids.
6252 struct FileHeader {// 16 bytes
6253  uint64_t magic;// 8 bytes
6254  Version version;// 4 bytes version numbers
6255  uint16_t gridCount;// 2 bytes
6256  Codec codec;// 2 bytes
6257  bool isValid() const {return magic == NANOVDB_MAGIC_NUMB || magic == NANOVDB_MAGIC_FILE;}
6258 }; // FileHeader ( 16 bytes = 2 words )
6259 
6260 // @brief Data encoded for each of the grids associated with a segment.
6261 // Grid size in memory (uint64_t) |
6262 // Grid size on disk (uint64_t) |
6263 // Grid name hash key (uint64_t) |
6264 // Number of active voxels (uint64_t) |
6265 // Grid type (uint32_t) |
6266 // Grid class (uint32_t) |
6267 // Characters in grid name (uint32_t) |
6268 // AABB in world space (2*3*double) | one per grid in file
6269 // AABB in index space (2*3*int) |
6270 // Size of a voxel in world units (3*double) |
6271 // Byte size of the grid name (uint32_t) |
6272 // Number of nodes per level (4*uint32_t) |
6273 // Number of active tiles per level (3*uint32_t) |
6274 // Codec for file compression (uint16_t) |
6275 // Padding due to 8B alignment (uint16_t) |
6276 // Version number (uint32_t) |
6277 struct FileMetaData
6278 {// 176 bytes
6279  uint64_t gridSize, fileSize, nameKey, voxelCount; // 4 * 8 = 32B.
6280  GridType gridType; // 4B.
6281  GridClass gridClass; // 4B.
6282  Vec3dBBox worldBBox; // 2 * 3 * 8 = 48B.
6283  CoordBBox indexBBox; // 2 * 3 * 4 = 24B.
6284  Vec3d voxelSize; // 24B.
6285  uint32_t nameSize; // 4B.
6286  uint32_t nodeCount[4]; //4 x 4 = 16B
6287  uint32_t tileCount[3];// 3 x 4 = 12B
6288  Codec codec; // 2B
6289  uint16_t padding;// 2B, due to 8B alignment from uint64_t
6290  Version version;// 4B
6291 }; // FileMetaData
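
// Layout sketch for an uncompressed segment: a FileHeader is followed, per grid, by one
// FileMetaData record, the null-terminated grid name and the grid buffer itself. Advancing past
// one grid entry therefore amounts to skipping
//
//   sizeof(FileMetaData) + meta.nameSize + meta.gridSize // i.e. 176 + nameSize + gridSize bytes
//
// from the position where that grid's FileMetaData was read (gridSize == fileSize when no codec is used).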
6292 
6293 // the following code block uses std and therefore needs to be ignored by CUDA and HIP
6294 #if !defined(__CUDA_ARCH__) && !defined(__HIP__)
6295 
6296 // Note that starting with version 32.6.0 it is possible to write and read raw grid buffers to
6297 // files, e.g. os.write((const char*)buffer.data(), buffer.size()) or more conveniently as
6298 // handle.write(fileName). In addition to this simple approach we offer the methods below to
6299 // write traditional uncompressed nanovdb files that unlike raw files include metadata that
6300 // is used for tools like nanovdb_print.
6301 
6302 ///
6303 /// @brief This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h
6304 /// Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also
6305 /// works if client code only includes PNanoVDB.h!
6306 ///
6307 /// @details Writes a raw NanoVDB buffer, possibly with multiple grids, to a stream WITHOUT compression.
6308 /// It follows all the conventions in util/IO.h so the stream can be read by all existing client
6309 /// code of NanoVDB.
6310 ///
6311 /// @note This method will always write uncompressed grids to the stream, i.e. Blosc or ZIP compression
6312 /// is never applied! This is a fundamental limitation and feature of this standalone function.
6313 ///
6314 /// @throw std::invalid_argument if buffer does not point to a valid NanoVDB grid.
6315 ///
6316 /// @warning This is pretty ugly code that involves lots of pointer and bit manipulations - not for the faint of heart :)
6317 template<typename StreamT> // StreamT class must support: "void write(const char*, size_t)"
6318 void writeUncompressedGrid(StreamT& os, const GridData* gridData, bool raw = false)
6319 {
6320  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMB || gridData->mMagic == NANOVDB_MAGIC_GRID);
6321  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
6322  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
6323 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
6324  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u, Codec::NONE};
6325 #else
6326  FileHeader head{NANOVDB_MAGIC_NUMB, gridData->mVersion, 1u, Codec::NONE};
6327 #endif
6328  const char* gridName = gridData->gridName();
6329  const uint32_t nameSize = util::strlen(gridName) + 1;// include '\0'
6330  const TreeData* treeData = (const TreeData*)(gridData->treePtr());
6331  FileMetaData meta{gridData->mGridSize, gridData->mGridSize, 0u, treeData->mVoxelCount,
6332  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
6333  treeData->bbox(), gridData->mVoxelSize, nameSize,
6334  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
6335  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
6336  Codec::NONE, 0u, gridData->mVersion }; // FileMetaData
6337  os.write((const char*)&head, sizeof(FileHeader)); // write header
6338  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
6339  os.write(gridName, nameSize); // write grid name
6340  }
6341  os.write((const char*)gridData, gridData->mGridSize);// write the grid
6342 }// writeUncompressedGrid
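
// Minimal usage sketch, assuming <fstream> is included and "handle" is a GridHandle
// (see util/GridHandle.h) that exposes gridData(uint32_t):
//
//   std::ofstream os("sphere.nvdb", std::ios::out | std::ios::binary | std::ios::trunc);
//   writeUncompressedGrid(os, handle.gridData(0)); // raw == false: prepends FileHeader, FileMetaData and name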
6343 
6344 /// @brief write multiple NanoVDB grids to a single file, without compression.
6345 /// @note To write all grids in a single GridHandle simply use handle.write("fileName")
6346 template<typename GridHandleT, template<typename...> class VecT>
6347 void writeUncompressedGrids(const char* fileName, const VecT<GridHandleT>& handles, bool raw = false)
6348 {
6349 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ofstream or FILE implementations
6350  std::ofstream os(fileName, std::ios::out | std::ios::binary | std::ios::trunc);
6351 #else
6352  struct StreamT {
6353  FILE* fptr;
6354  StreamT(const char* name) { fptr = fopen(name, "wb"); }
6355  ~StreamT() { fclose(fptr); }
6356  void write(const char* data, size_t n) { fwrite(data, 1, n, fptr); }
6357  bool is_open() const { return fptr != NULL; }
6358  } os(fileName);
6359 #endif
6360  if (!os.is_open()) {
6361  fprintf(stderr, "nanovdb::writeUncompressedGrids: Unable to open file \"%s\" for output\n", fileName);
6362  exit(EXIT_FAILURE);
6363  }
6364  for (auto& h : handles) {
6365  for (uint32_t n=0; n<h.gridCount(); ++n) writeUncompressedGrid(os, h.gridData(n), raw);
6366  }
6367 } // writeUncompressedGrids
6368 
6369 /// @brief read all uncompressed grids from a stream and return their handles.
6370 ///
6371 /// @throw std::invalid_argument if stream does not contain a single uncompressed valid NanoVDB grid
6372 ///
6373 /// @details StreamT class must support: "bool read(char*, size_t)" and "void skip(uint32_t)"
6374 template<typename GridHandleT, typename StreamT, template<typename...> class VecT>
6375 VecT<GridHandleT> readUncompressedGrids(StreamT& is, const typename GridHandleT::BufferType& pool = typename GridHandleT::BufferType())
6376 {
6377  VecT<GridHandleT> handles;
6378  GridData data;
6379  is.read((char*)&data, sizeof(GridData));
6380  if (data.isValid()) {// stream contains a raw grid buffer
6381  uint64_t size = data.mGridSize, sum = 0u;
6382  while(data.mGridIndex + 1u < data.mGridCount) {
6383  is.skip(data.mGridSize - sizeof(GridData));// skip grid
6384  is.read((char*)&data, sizeof(GridData));// read sizeof(GridData) bytes
6385  sum += data.mGridSize;
6386  }
6387  is.skip(-int64_t(sum + sizeof(GridData)));// rewind to start
6388  auto buffer = GridHandleT::BufferType::create(size + sum, &pool);
6389  is.read((char*)(buffer.data()), buffer.size());
6390  handles.emplace_back(std::move(buffer));
6391  } else {// Header0, MetaData0, gridName0, Grid0...HeaderN, MetaDataN, gridNameN, GridN
6392  is.skip(-int64_t(sizeof(GridData)));// rewind
6393  FileHeader head;
6394  while(is.read((char*)&head, sizeof(FileHeader))) {
6395  if (!head.isValid()) {
6396  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid magic number = \"%s\"\n", (const char*)&(head.magic));
6397  exit(EXIT_FAILURE);
6398  } else if (!head.version.isCompatible()) {
6399  char str[20];
6400  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid major version = \"%s\"\n", toStr(str, head.version));
6401  exit(EXIT_FAILURE);
6402  } else if (head.codec != Codec::NONE) {
6403  char str[8];
6404  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid codec = \"%s\"\n", toStr(str, head.codec));
6405  exit(EXIT_FAILURE);
6406  }
6407  FileMetaData meta;
6408  for (uint16_t i = 0; i < head.gridCount; ++i) { // read all grids in segment
6409  is.read((char*)&meta, sizeof(FileMetaData));// read meta data
6410  is.skip(meta.nameSize); // skip grid name
6411  auto buffer = GridHandleT::BufferType::create(meta.gridSize, &pool);
6412  is.read((char*)buffer.data(), meta.gridSize);// read grid
6413  handles.emplace_back(std::move(buffer));
6414  }// loop over grids in segment
6415  }// loop over segments
6416  }
6417  return handles;
6418 } // readUncompressedGrids
6419 
6420 /// @brief Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
6421 template<typename GridHandleT, template<typename...> class VecT>
6422 VecT<GridHandleT> readUncompressedGrids(const char* fileName, const typename GridHandleT::BufferType& buffer = typename GridHandleT::BufferType())
6423 {
6424 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ifstream or FILE implementations
6425  struct StreamT : public std::ifstream {
6426  StreamT(const char* name) : std::ifstream(name, std::ios::in | std::ios::binary){}
6427  void skip(int64_t off) { this->seekg(off, std::ios_base::cur); }
6428  };
6429 #else
6430  struct StreamT {
6431  FILE* fptr;
6432  StreamT(const char* name) { fptr = fopen(name, "rb"); }
6433  ~StreamT() { fclose(fptr); }
6434  bool read(char* data, size_t n) {
6435  size_t m = fread(data, 1, n, fptr);
6436  return n == m;
6437  }
6438  void skip(int64_t off) { fseek(fptr, (long int)off, SEEK_CUR); }
6439  bool is_open() const { return fptr != NULL; }
6440  };
6441 #endif
6442  StreamT is(fileName);
6443  if (!is.is_open()) {
6444  fprintf(stderr, "nanovdb::readUncompressedGrids: Unable to open file \"%s\" for input\n", fileName);
6445  exit(EXIT_FAILURE);
6446  }
6447  return readUncompressedGrids<GridHandleT, StreamT, VecT>(is, buffer);
6448 } // readUncompressedGrids
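
// Minimal usage sketch, assuming <vector> is included and GridHandle<> from util/GridHandle.h
// (with its default host buffer) is used as the handle type:
//
//   auto handles = readUncompressedGrids<GridHandle<>, std::vector>("sphere.nvdb");
//   for (auto& h : handles) { /* each handle owns one grid buffer read from the file */ }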
6449 
6450 #endif // if !defined(__CUDA_ARCH__) && !defined(__HIP__)
6451 
6452 } // namespace io
6453 
6454 // ----------------------------> Implementations of random access methods <--------------------------------------
6455 
6456 /// @brief Implements Tree::getValue(math::Coord), i.e. return the value associated with a specific coordinate @c ijk.
6457 /// @tparam BuildT Build type of the grid being called
6458 /// @details The value at a coordinate maps to the background, a tile value or a leaf value.
6459 template<typename BuildT>
6460 struct GetValue
6461 {
6462  __hostdev__ static auto get(const NanoRoot<BuildT>& root) { return root.mBackground; }
6463  __hostdev__ static auto get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.value; }
6464  __hostdev__ static auto get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6465  __hostdev__ static auto get(const NanoLower<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6466  __hostdev__ static auto get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.getValue(n); } // works with all build types
6467 }; // GetValue<BuildT>
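
// The functors above (and below) are what ReadAccessor::get and Tree::get dispatch to. Calling
// one directly is equivalent to the corresponding convenience method; sketch, assuming "acc" is
// a ReadAccessor of a float grid:
//
//   const float v1 = acc.getValue(Coord(1, 2, 3));
//   const float v2 = acc.get<GetValue<float>>(Coord(1, 2, 3)); // same value as v1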
6468 
6469 template<typename BuildT>
6470 struct SetValue
6471 {
6472  static_assert(!BuildTraits<BuildT>::is_special, "SetValue does not support special value types");
6473  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6474  __hostdev__ static auto set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6475  __hostdev__ static auto set(typename NanoRoot<BuildT>::Tile& tile, const ValueT& v) { tile.value = v; }
6476  __hostdev__ static auto set(NanoUpper<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6477  __hostdev__ static auto set(NanoLower<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6478  __hostdev__ static auto set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6479 }; // SetValue<BuildT>
6480 
6481 template<typename BuildT>
6482 struct SetVoxel
6483 {
6484  static_assert(!BuildTraits<BuildT>::is_special, "SetVoxel does not support special value types");
6485  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6486  __hostdev__ static auto set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6487  __hostdev__ static auto set(typename NanoRoot<BuildT>::Tile&, const ValueT&) {} // no-op
6488  __hostdev__ static auto set(NanoUpper<BuildT>&, uint32_t, const ValueT&) {} // no-op
6489  __hostdev__ static auto set(NanoLower<BuildT>&, uint32_t, const ValueT&) {} // no-op
6490  __hostdev__ static auto set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6491 }; // SetVoxel<BuildT>
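
// Sketch of in-place voxel modification via ReadAccessor::set, assuming "acc" is a ReadAccessor
// of a float grid whose buffer lives in writable host memory and "ijk" resolves to an allocated
// leaf voxel (SetVoxel is a no-op for root, tile and internal-node values, and it leaves the
// active state unchanged):
//
//   acc.set<SetVoxel<float>>(ijk, 1.5f);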
6492 
6493 /// @brief Implements Tree::isActive(math::Coord)
6494 /// @tparam BuildT Build type of the grid being called
6495 template<typename BuildT>
6496 struct GetState
6497 {
6498  __hostdev__ static auto get(const NanoRoot<BuildT>&) { return false; }
6499  __hostdev__ static auto get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.state > 0; }
6500  __hostdev__ static auto get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6501  __hostdev__ static auto get(const NanoLower<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6502  __hostdev__ static auto get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.mValueMask.isOn(n); }
6503 }; // GetState<BuildT>
6504 
6505 /// @brief Implements Tree::getDim(math::Coord)
6506 /// @tparam BuildT Build type of the grid being called
6507 template<typename BuildT>
6508 struct GetDim
6509 {
6510  __hostdev__ static uint32_t get(const NanoRoot<BuildT>&) { return 0u; } // background
6511  __hostdev__ static uint32_t get(const typename NanoRoot<BuildT>::Tile&) { return 4096u; }
6512  __hostdev__ static uint32_t get(const NanoUpper<BuildT>&, uint32_t) { return 128u; }
6513  __hostdev__ static uint32_t get(const NanoLower<BuildT>&, uint32_t) { return 8u; }
6514  __hostdev__ static uint32_t get(const NanoLeaf<BuildT>&, uint32_t) { return 1u; }
6515 }; // GetDim<BuildT>
6516 
6517 /// @brief Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord)
6518 /// @tparam BuildT Build type of the grid being called
6519 template<typename BuildT>
6520 struct GetLeaf
6521 {
6522  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
6523  __hostdev__ static const NanoLeaf<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6524  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6525  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
6526  __hostdev__ static const NanoLeaf<BuildT>* get(const NanoLeaf<BuildT>& leaf, uint32_t) { return &leaf; }
6527 }; // GetLeaf<BuildT>
6528 
6529 /// @brief Return a pointer to the lower internal node where math::Coord maps to one of its values, i.e. where traversal terminates
6530 /// @tparam BuildT Build type of the grid being called
6531 template<typename BuildT>
6532 struct GetLower
6533 {
6534  __hostdev__ static const NanoLower<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
6535  __hostdev__ static const NanoLower<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6536  __hostdev__ static const NanoLower<BuildT>* get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6537  __hostdev__ static const NanoLower<BuildT>* get(const NanoLower<BuildT>& node, uint32_t) { return &node; }
6538  __hostdev__ static const NanoLower<BuildT>* get(const NanoLeaf<BuildT>&, uint32_t) { return nullptr; }
6539 }; // GetLower<BuildT>
6540 
6541 /// @brief Return a pointer to the upper internal node where math::Coord maps to one of its values, i.e. where traversal terminates
6542 /// @tparam BuildT Build type of the grid being called
6543 template<typename BuildT>
6544 struct GetUpper
6545 {
6546  __hostdev__ static const NanoUpper<BuildT>* get(const NanoRoot<BuildT>&) { return nullptr; }
6547  __hostdev__ static const NanoUpper<BuildT>* get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6548  __hostdev__ static const NanoUpper<BuildT>* get(const NanoUpper<BuildT>& node, uint32_t) { return &node; }
6549  __hostdev__ static const NanoUpper<BuildT>* get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
6550  __hostdev__ static const NanoUpper<BuildT>* get(const NanoLeaf<BuildT>&, uint32_t) { return nullptr; }
6551 }; // GetUpper<BuildT>
6552 
6553 /// @brief Implements Tree::probeValue(math::Coord)
6554 /// @tparam BuildT Build type of the grid being called
6555 template<typename BuildT>
6556 struct ProbeValue
6557 {
6558  using ValueT = typename BuildToValueMap<BuildT>::Type;
6559  __hostdev__ static bool get(const NanoRoot<BuildT>& root, ValueT& v)
6560  {
6561  v = root.mBackground;
6562  return false;
6563  }
6564  __hostdev__ static bool get(const typename NanoRoot<BuildT>::Tile& tile, ValueT& v)
6565  {
6566  v = tile.value;
6567  return tile.state > 0u;
6568  }
6569  __hostdev__ static bool get(const NanoUpper<BuildT>& node, uint32_t n, ValueT& v)
6570  {
6571  v = node.mTable[n].value;
6572  return node.mValueMask.isOn(n);
6573  }
6574  __hostdev__ static bool get(const NanoLower<BuildT>& node, uint32_t n, ValueT& v)
6575  {
6576  v = node.mTable[n].value;
6577  return node.mValueMask.isOn(n);
6578  }
6579  __hostdev__ static bool get(const NanoLeaf<BuildT>& leaf, uint32_t n, ValueT& v)
6580  {
6581  v = leaf.getValue(n);
6582  return leaf.mValueMask.isOn(n);
6583  }
6584 }; // ProbeValue<BuildT>
6585 
6586 /// @brief Implements Tree::getNodeInfo(math::Coord)
6587 /// @tparam BuildT Build type of the grid being called
6588 template<typename BuildT>
6589 struct GetNodeInfo
6590 {
6591  using ValueType = typename NanoLeaf<BuildT>::ValueType;
6592  using FloatType = typename NanoLeaf<BuildT>::FloatType;
6593  struct NodeInfo
6594  {
6595  uint32_t level, dim;
6596  ValueType minimum, maximum;
6597  FloatType average, stdDevi;
6598  CoordBBox bbox;
6599  };
6600  __hostdev__ static NodeInfo get(const NanoRoot<BuildT>& root)
6601  {
6602  return NodeInfo{3u, NanoUpper<BuildT>::DIM, root.minimum(), root.maximum(), root.average(), root.stdDeviation(), root.bbox()};
6603  }
6604  __hostdev__ static NodeInfo get(const typename NanoRoot<BuildT>::Tile& tile)
6605  {
6606  return NodeInfo{3u, NanoUpper<BuildT>::DIM, tile.value, tile.value, static_cast<FloatType>(tile.value), 0, CoordBBox::createCube(tile.origin(), NanoUpper<BuildT>::DIM)};
6607  }
6608  __hostdev__ static NodeInfo get(const NanoUpper<BuildT>& node, uint32_t n)
6609  {
6610  return NodeInfo{2u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6611  }
6612  __hostdev__ static NodeInfo get(const NanoLower<BuildT>& node, uint32_t n)
6613  {
6614  return NodeInfo{1u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6615  }
6616  __hostdev__ static NodeInfo get(const NanoLeaf<BuildT>& leaf, uint32_t n)
6617  {
6618  return NodeInfo{0u, leaf.dim(), leaf.minimum(), leaf.maximum(), leaf.average(), leaf.stdDeviation(), leaf.bbox()};
6619  }
6620 }; // GetNodeInfo<BuildT>
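
// Sketch, assuming "acc" is a ReadAccessor of a float grid:
//
//   const auto info = acc.getNodeInfo(Coord(1, 2, 3));
//   // info.level is 0 for a leaf, 1/2 for lower/upper internal nodes and 3 for the root or a root tile;
//   // info.dim, info.minimum, info.maximum, info.average, info.stdDevi and info.bbox are filled as above.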
6621 
6622 } // namespace nanovdb ===================================================================
6623 
6624 #endif // end of NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
typename FloatTraits< BuildT >::FloatType FloatType
Definition: NanoVDB.h:3745
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:3780
__hostdev__ ValueOffIterator beginValueOff() const
Definition: NanoVDB.h:4411
__hostdev__ DenseIter()
Definition: NanoVDB.h:2840
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:2140
__hostdev__ bool probeValue(const math::Coord &ijk, typename util::remove_const< ChannelT >::type &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:6181
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3890
typename BuildT::RootType RootType
Definition: NanoVDB.h:2015
__hostdev__ const Vec3d & voxelSize() const
Return a const reference to the size of a voxel in world units.
Definition: NanoVDB.h:2075
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5670
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1082
ValueT ValueType
Definition: NanoVDB.h:5092
__hostdev__ uint64_t full() const
Definition: NanoVDB.h:1743
__hostdev__ const char * shortGridName() const
Return a c-string with the name of this grid, truncated to 255 characters.
Definition: NanoVDB.h:2174
__hostdev__ util::enable_if<!util::is_same< MaskT, Mask >::value, Mask & >::type operator=(const MaskT &other)
Assignment operator that works with openvdb::util::NodeMask.
Definition: NanoVDB.h:1164
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this root node and any of its child n...
Definition: NanoVDB.h:2913
bool type
Definition: NanoVDB.h:499
Visits all tile values in this node, i.e. both inactive and active tiles.
Definition: NanoVDB.h:3344
__hostdev__ math::BBox< CoordT > bbox() const
Return the bounding box in index space of active values in this leaf node.
Definition: NanoVDB.h:4524
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4438
uint16_t ArrayType
Definition: NanoVDB.h:4265
__hostdev__ void next()
Definition: NanoVDB.h:2675
__hostdev__ CheckMode toCheckMode(const Checksum &checksum)
Maps 64 bit checksum to CheckMode enum.
Definition: NanoVDB.h:1782
C++11 implementation of std::enable_if.
Definition: Util.h:335
FloatType mStdDevi
Definition: NanoVDB.h:3757
float type
Definition: NanoVDB.h:506
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3986
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5679
__hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:4515
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4110
__hostdev__ bool isEmpty() const
Definition: NanoVDB.h:5969
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4996
Type Pow2(Type x)
Return x2.
Definition: Math.h:548
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:4480
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:5946
typename util::match_const< DataType, RootT >::type DataT
Definition: NanoVDB.h:2661
void writeUncompressedGrids(const char *fileName, const VecT< GridHandleT > &handles, bool raw=false)
write multiple NanoVDB grids to a single file, without compression.
Definition: NanoVDB.h:6347
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2320
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:2939
Definition: NanoVDB.h:6277
__hostdev__ Vec3d getVoxelSize() const
Return a voxels size in each coordinate direction, measured at the origin.
Definition: NanoVDB.h:1484
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4966
StatsT mStdDevi
Definition: NanoVDB.h:3192
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:2154
__hostdev__ void setBlindData(void *blindData)
Definition: NanoVDB.h:1540
__hostdev__ const Vec3dBBox & worldBBox() const
Definition: NanoVDB.h:5960
__hostdev__ Vec3T applyMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1890
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findFirst() const
Definition: NanoVDB.h:1306
__hostdev__ TileT * tile() const
Definition: NanoVDB.h:2676
__hostdev__ bool isOff(uint32_t n) const
Return true if the given bit is NOT set.
Definition: NanoVDB.h:1193
__hostdev__ Vec3T applyMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1901
__hostdev__ const char * gridName() const
Definition: NanoVDB.h:1958
__hostdev__ ChannelT * setChannel(ChannelT *channelPtr)
Change to an external channel.
Definition: NanoVDB.h:6160
GridBlindDataClass mDataClass
Definition: NanoVDB.h:1529
typename util::match_const< Tile, RootT >::type TileT
Definition: NanoVDB.h:2662
__hostdev__ ChildT * getChild(uint32_t n)
Returns a pointer to the child node at the specifed linear offset.
Definition: NanoVDB.h:3220
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5675
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1481
VDB Tree, which is a thin wrapper around a RootNode.
Definition: NanoVDB.h:2307
__hostdev__ Vec3T applyMapF(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
Definition: NanoVDB.h:1412
decltype(mFlags) Type
Definition: NanoVDB.h:931
__hostdev__ Vec3T indexToWorld(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2086
math::BBox< CoordType > BBoxType
Definition: NanoVDB.h:2650
__hostdev__ Tile * tile(uint32_t n)
Definition: NanoVDB.h:2564
__hostdev__ DenseIter operator++(int)
Definition: NanoVDB.h:2872
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2762
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5676
__hostdev__ GridClass mapToGridClass(GridClass defaultClass=GridClass::Unknown)
Definition: NanoVDB.h:894
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2546
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:2352
__hostdev__ ValueIterator()
Definition: NanoVDB.h:4421
float Type
Definition: NanoVDB.h:526
float FloatType
Definition: NanoVDB.h:3815
__hostdev__ CoordT origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:4500
Highest level of the data structure. Contains a tree and a world->index transform (that currently onl...
Definition: NanoVDB.h:2011
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4972
__hostdev__ ValueOnIter(RootT *parent)
Definition: NanoVDB.h:2799
Vec3dBBox mWorldBBox
Definition: NanoVDB.h:1822
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3329
__hostdev__ const NodeTrait< RootT, 1 >::type * getFirstLower() const
Definition: NanoVDB.h:2446
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1909
FloatType stdDevi
Definition: NanoVDB.h:6597
__hostdev__ char * toStr(char *dst, GridType gridType)
Maps a GridType to a c-string.
Definition: NanoVDB.h:253
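A hedged example of the enum-to-string helper above, combined with toGridType (listed further below); the buffer size is just a generous guess and the exact string returned is version dependent.
    void demoGridTypeName()
    {
        char buf[64];                                             // scratch buffer (assumed large enough)
        const nanovdb::GridType t = nanovdb::toGridType<float>(); // build type -> GridType enum
        nanovdb::toStr(buf, t);                                   // writes the c-string name, e.g. "float"
    }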
__hostdev__ ValueType maximum() const
Return a const reference to the maximum active value encoded in this leaf node.
Definition: NanoVDB.h:4486
__hostdev__ DenseIterator(const InternalNode *parent)
Definition: NanoVDB.h:3428
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4239
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2897
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this internal node.
Definition: NanoVDB.h:3478
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5377
#define NANOVDB_PATCH_VERSION_NUMBER
Definition: NanoVDB.h:146
__hostdev__ void init(std::initializer_list< GridFlags > list={GridFlags::IsBreadthFirst}, uint64_t gridSize=0u, const Map &map=Map(), GridType gridType=GridType::Unknown, GridClass gridClass=GridClass::Unknown)
Definition: NanoVDB.h:1834
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5146
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3889
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:4117
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &v)
Sets the value at the specified location but leaves its state unchanged.
Definition: NanoVDB.h:4570
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1438
__hostdev__ ValueOnIter()
Definition: NanoVDB.h:2795
Class to access values in channels at a specific voxel location.
Definition: NanoVDB.h:6108
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4252
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3778
static __hostdev__ uint32_t padding()
Definition: NanoVDB.h:4540
typename GridT::TreeType Type
Definition: NanoVDB.h:2293
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:2713
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5674
char mGridName[MaxNameSize]
Definition: NanoVDB.h:1820
__hostdev__ bool operator>(const Version &rhs) const
Definition: NanoVDB.h:703
static __hostdev__ size_t memUsage(uint32_t bitWidth)
Definition: NanoVDB.h:3994
__hostdev__ void setChild(const CoordType &k, const void *ptr, const RootData *data)
Definition: NanoVDB.h:2532
__hostdev__ Version version() const
Definition: NanoVDB.h:2033
PointAccessor(const NanoGrid< Point > &grid)
Definition: NanoVDB.h:6047
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:2615
__hostdev__ ValueType getValue(uint32_t i) const
Definition: NanoVDB.h:3771
__hostdev__ Map(double s, const Vec3d &t=Vec3d(0.0, 0.0, 0.0))
Definition: NanoVDB.h:1372
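A hedged sketch of the uniform-scale constructor above: build a Map with voxel size 0.5 and zero translation, then map a point from index to world space and back using applyMap/applyInverseMap (both listed on this page).
    void demoMap()
    {
        const nanovdb::Map map(0.5); // uniform scale of 0.5, default zero translation
        const nanovdb::Vec3d world = map.applyMap(nanovdb::Vec3d(1.0, 2.0, 3.0)); // index -> world
        const nanovdb::Vec3d index = map.applyInverseMap(world);                  // world -> index
        (void)index;
    }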
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:3556
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:3283
__hostdev__ void setLongGridNameOn(bool on=true)
Definition: NanoVDB.h:1879
__hostdev__ Mask(const Mask &other)
Copy constructor.
Definition: NanoVDB.h:1152
static __hostdev__ uint32_t CoordToOffset(const CoordT &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:4598
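A hedged worked example of the leaf-level offset above, assuming the default 8^3 leaf layout (LOG2DIM = 3), where offset = ((i & 7) << 6) | ((j & 7) << 3) | (k & 7).
    void demoLeafOffset()
    {
        // Coord(9,2,3): (9 & 7) = 1, so offset = 1*64 + 2*8 + 3 = 83
        const uint32_t n = nanovdb::NanoLeaf<float>::CoordToOffset(nanovdb::Coord(9, 2, 3));
        NANOVDB_ASSERT(n == 83);
        (void)n;
    }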
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4214
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4238
__hostdev__ const BlindDataT * getBlindData(uint32_t n) const
Definition: NanoVDB.h:2204
#define NANOVDB_MAGIC_NUMB
Definition: NanoVDB.h:134
GridClass
Classes (superset of OpenVDB) that are currently supported by NanoVDB.
Definition: NanoVDB.h:290
typename DataType::ValueT ValueType
Definition: NanoVDB.h:2645
uint64_t magic
Definition: NanoVDB.h:6253
__hostdev__ bool isPartial() const
return true if the 64 bit checksum is partial, i.e. covers the head only
Definition: NanoVDB.h:1752
static T scalar(const T &s)
Definition: NanoVDB.h:738
typename RootT::BuildType BuildType
Definition: NanoVDB.h:2322
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4127
Definition: NanoVDB.h:2741
__hostdev__ void * treePtr()
Definition: NanoVDB.h:1912
uint32_t state
Definition: NanoVDB.h:2552
BuildT BuildType
Definition: NanoVDB.h:3744
__hostdev__ void setDev(const FloatType &v)
Definition: NanoVDB.h:3788
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2319
Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord...
Definition: NanoVDB.h:1671
__hostdev__ bool getDev() const
Definition: NanoVDB.h:4121
#define NANOVDB_MAGIC_FRAG
Definition: NanoVDB.h:138
__hostdev__ bool isValid(GridType gridType, GridClass gridClass)
return true if the combination of GridType and GridClass is valid.
Definition: NanoVDB.h:613
static __hostdev__ bool isAligned(const void *p)
return true if the specified pointer is 32 byte aligned
Definition: NanoVDB.h:547
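A hedged sketch of the alignment check above; the local buffer is purely illustrative. Any memory holding a NanoVDB grid must start on a NANOVDB_DATA_ALIGNMENT (32 byte) boundary.
    void demoAlignment()
    {
        alignas(NANOVDB_DATA_ALIGNMENT) unsigned char buffer[1024]; // hypothetical grid storage
        NANOVDB_ASSERT(nanovdb::isAligned(buffer)); // true: buffer starts on a 32 byte boundary
        (void)buffer;
    }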
__hostdev__ void * getRoot()
Get a non-const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2269
__hostdev__ const CoordBBox & indexBBox() const
Definition: NanoVDB.h:5961
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:3340
uint8_t mFlags
Definition: NanoVDB.h:3751
__hostdev__ LeafNodeType * getFirstLeaf()
Template specializations of getFirstNode.
Definition: NanoVDB.h:2443
uint64_t mOffset
Definition: NanoVDB.h:4273
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3791
__hostdev__ ValueIter(RootT *parent)
Definition: NanoVDB.h:2750
static int64_t PtrDiff(const void *p, const void *q)
Compute the distance, in bytes, between two pointers, dist = p - q.
Definition: Util.h:464
__hostdev__ uint32_t gridIndex() const
Return index of this grid in the buffer.
Definition: NanoVDB.h:2046
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:2346
__hostdev__ bool isEmpty() const
Return true if the root is empty, i.e. has no child nodes or constant tiles.
Definition: NanoVDB.h:2278
Definition: NanoVDB.h:2001
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:2617
LeafNodeType Node0
Definition: NanoVDB.h:2329
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:2938
Checksum mChecksum
Definition: NanoVDB.h:1814
Return point to the upper internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6544
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:2141
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:6011
typename DataType::StatsT FloatType
Definition: NanoVDB.h:2646
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this root node and any of its child nodes...
Definition: NanoVDB.h:2922
Implements Tree::getValue(math::Coord), i.e. return the value associated with a specific coordinate i...
Definition: NanoVDB.h:1661
BitFlags()
Definition: NanoVDB.h:932
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:3782
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:2737
__hostdev__ bool isActive(const CoordT &ijk) const
Return true if the voxel value at the given coordinate is active.
Definition: NanoVDB.h:4574
__hostdev__ ConstDenseIterator cbeginDense() const
Definition: NanoVDB.h:2884
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4219
ChildT ChildNodeType
Definition: NanoVDB.h:2639
#define NANOVDB_MAGIC_GRID
Definition: NanoVDB.h:135
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4126
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:3413
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this leaf node.
Definition: NanoVDB.h:4479
void set(const MatT &mat, const MatT &invMat, const Vec3T &translate, double taper=1.0)
Initialize the member data from 3x3 or 4x4 matrices.
Definition: NanoVDB.h:1488
static __hostdev__ KeyT CoordToKey(const CoordType &ijk)
Definition: NanoVDB.h:2492
__hostdev__ void setAvg(float avg)
Definition: NanoVDB.h:3865
MaskT< LOG2DIM > ArrayType
Definition: NanoVDB.h:4051
T Type
Definition: NanoVDB.h:463
__hostdev__ bool isActive() const
Definition: NanoVDB.h:3372
uint64_t mMagic
Definition: NanoVDB.h:1813
__hostdev__ ChannelT * setChannel(uint32_t channelID)
Change to an internal channel, assuming it exists as blind data in the IndexGrid.
Definition: NanoVDB.h:6166
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4125
__hostdev__ bool isOff() const
Return true if none of the bits are set in this Mask.
Definition: NanoVDB.h:1205
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:5945
__hostdev__ uint32_t valueCount() const
Definition: NanoVDB.h:4210
uint64_t mGridSize
Definition: NanoVDB.h:1819
__hostdev__ NodeT * probeChild(ValueType &value) const
Definition: NanoVDB.h:2849
RootT Node3
Definition: NanoVDB.h:2326
PointType
Definition: NanoVDB.h:401
__hostdev__ void toggle(uint32_t n)
Definition: NanoVDB.h:1269
Trait to map from LEVEL to node type.
Definition: NanoVDB.h:4754
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4168
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:2620
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4169
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:2143
__hostdev__ ValueIter()
Definition: NanoVDB.h:2746
__hostdev__ const char * shortGridName() const
Definition: NanoVDB.h:5958
#define NANOVDB_MINOR_VERSION_NUMBER
Definition: NanoVDB.h:145
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5114
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this leaf node.
Definition: NanoVDB.h:4492
uint64_t KeyT
Return a key based on the coordinates of a voxel.
Definition: NanoVDB.h:2490
Vec3d mVoxelSize
Definition: NanoVDB.h:1823
BuildT ValueType
Definition: NanoVDB.h:3743
uint64_t mFlags
Definition: NanoVDB.h:3185
__hostdev__ const uint32_t & getTableSize() const
Definition: NanoVDB.h:2910
int64_t mDataOffset
Definition: NanoVDB.h:1525
__hostdev__ ValueIterator()
Definition: NanoVDB.h:3350
__hostdev__ Checksum(uint32_t head, uint32_t tail)
Constructor that allows the two 32bit checksums to be initiated explicitly.
Definition: NanoVDB.h:1725
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1111
__hostdev__ Mask()
Initialize all bits to zero.
Definition: NanoVDB.h:1139
__hostdev__ bool isCached2(const CoordType &ijk) const
Definition: NanoVDB.h:5358
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3523
Implements Tree::getNodeInfo(math::Coord)
Definition: NanoVDB.h:1675
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4216
__hostdev__ void setStdDeviationOn(bool on=true)
Definition: NanoVDB.h:1881
uint64_t voxelCount
Definition: NanoVDB.h:6279
__hostdev__ uint32_t gridCount() const
Return total number of grids in the buffer.
Definition: NanoVDB.h:2049
__hostdev__ bool isRootConnected() const
return true if RootData follows TreeData in memory without any extra padding
Definition: NanoVDB.h:1996
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:6023
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4994
__hostdev__ bool isValid() const
Methods related to the classification of this grid.
Definition: NanoVDB.h:2139
__hostdev__ void setValue(uint32_t n, const ValueT &v)
Definition: NanoVDB.h:3213
ValueType minimum
Definition: NanoVDB.h:6596
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2893
ChildT UpperNodeType
Definition: NanoVDB.h:2642
uint32_t mGridCount
Definition: NanoVDB.h:1818
CoordT mBBoxMin
Definition: NanoVDB.h:3749
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:5941
__hostdev__ bool isActive() const
Definition: NanoVDB.h:4443
uint64_t FloatType
Definition: NanoVDB.h:781
__hostdev__ const void * nodePtr() const
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:1920
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:3783
__hostdev__ FloatType stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4495
Definition: NanoVDB.h:6593
__hostdev__ bool isValid() const
return true if the magic number and the version are both valid
Definition: NanoVDB.h:1864
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1619
uint64_t FloatType
Definition: NanoVDB.h:775
__hostdev__ RootT & root()
Definition: NanoVDB.h:2344
float mQuantum
Definition: NanoVDB.h:3823
char * strcpy(char *dst, const char *src)
Copy characters from src to dst.
Definition: Util.h:166
double FloatType
Definition: NanoVDB.h:805
static const int MaxNameSize
Definition: NanoVDB.h:1524
__hostdev__ bool isIndex(GridType gridType)
Return true if the GridType maps to a special index type (not a POD integer type).
Definition: NanoVDB.h:602
Map mMap
Definition: NanoVDB.h:1821
#define NANOVDB_MAGIC_FILE
Definition: NanoVDB.h:136
__hostdev__ ValueType minimum() const
Return a const reference to the minimum active value encoded in this leaf node.
Definition: NanoVDB.h:4483
float type
Definition: NanoVDB.h:520
__hostdev__ const uint32_t & tileCount() const
Return the number of tiles encoded in this root node.
Definition: NanoVDB.h:2909
__hostdev__ Vec3T applyJacobian(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1421
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:5950
Utility functions.
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2644
Bit-compacted representation of all three version numbers.
Definition: NanoVDB.h:678
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4193
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:5939
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:5944
__hostdev__ util::enable_if< BuildTraits< T >::is_index, const uint64_t & >::type valueCount() const
Return the total number of values indexed by this IndexGrid.
Definition: NanoVDB.h:2056
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4988
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, ChannelT *channelPtr)
Ctor from an IndexGrid and an external channel.
Definition: NanoVDB.h:6134
__hostdev__ bool operator>=(const Version &rhs) const
Definition: NanoVDB.h:704
typename DataType::ValueT ValueType
Definition: NanoVDB.h:3278
typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1603
static __hostdev__ uint32_t dim()
Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!) ...
Definition: NanoVDB.h:4521
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5600
Definition: NanoVDB.h:3173
static __hostdev__ uint32_t dim()
Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32) ...
Definition: NanoVDB.h:3472
__hostdev__ bool operator==(const Mask &other) const
Definition: NanoVDB.h:1178
__hostdev__ uint32_t gridIndex() const
Definition: NanoVDB.h:5956
__hostdev__ ChildIter & operator++()
Definition: NanoVDB.h:2718
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4308
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4123
__hostdev__ const ChildT * probeChild(ValueType &value) const
Definition: NanoVDB.h:3434
__hostdev__ bool operator==(const Checksum &rhs) const
return true if the checksums are identical
Definition: NanoVDB.h:1772
__hostdev__ const NanoGrid< BuildT > & grid() const
Definition: NanoVDB.h:5997
char * strncpy(char *dst, const char *src, size_t max)
Copies the first num characters of src to dst. If the end of the source C string (which is signaled b...
Definition: Util.h:185
__hostdev__ DenseIterator beginAll() const
Definition: NanoVDB.h:1136
__hostdev__ ConstValueIterator cbeginValueAll() const
Definition: NanoVDB.h:2787
__hostdev__ bool isValid() const
return true if this meta data has a valid combination of semantic, class and value tags ...
Definition: NanoVDB.h:1557
__hostdev__ void disable()
Definition: NanoVDB.h:1761
__hostdev__ const NanoGrid< IndexT > & grid() const
Return a const reference to the IndexGrid.
Definition: NanoVDB.h:6147
static constexpr uint32_t SIZE
Definition: NanoVDB.h:1037
uint32_t mNodeCount[3]
Definition: NanoVDB.h:2257
ValueType mMaximum
Definition: NanoVDB.h:3755
typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1604
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4107
__hostdev__ uint64_t blindDataSize() const
return size in bytes of the blind data represented by this blind meta data
Definition: NanoVDB.h:1585
static __hostdev__ CoordT KeyToCoord(const KeyT &key)
Definition: NanoVDB.h:2500
__hostdev__ const Map & map() const
Return a const reference to the Map for this grid.
Definition: NanoVDB.h:2078
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:4463
__hostdev__ void setRoot(const void *root)
Definition: NanoVDB.h:2263
static __hostdev__ uint32_t wordCount()
Return the number of machine words used by this Mask.
Definition: NanoVDB.h:1047
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:5951
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1110
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4081
typename DataType::BuildT BuildType
Definition: NanoVDB.h:2647
__hostdev__ void setMin(const bool &)
Definition: NanoVDB.h:4075
__hostdev__ bool isValid() const
Definition: NanoVDB.h:5938
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:2616
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4165
__hostdev__ uint32_t tail() const
Definition: NanoVDB.h:1747
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:4067
__hostdev__ bool updateBBox()
Updates the local bounding box of active voxels in this node. Return true if bbox was updated...
Definition: NanoVDB.h:4701
__hostdev__ DenseIter & operator++()
Definition: NanoVDB.h:2866
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:2155
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:2147
__hostdev__ uint64_t last(uint32_t i) const
Definition: NanoVDB.h:4290
bool FloatType
Definition: NanoVDB.h:769
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this internal node and an...
Definition: NanoVDB.h:3495
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:1085
Definition: NanoVDB.h:755
#define __hostdev__
Definition: Util.h:73
__hostdev__ const Checksum & checksum() const
Definition: NanoVDB.h:5967
typename DataType::FloatType FloatType
Definition: NanoVDB.h:4339
#define NANOVDB_DATA_ALIGNMENT
Definition: NanoVDB.h:126
typename DataType::Tile Tile
Definition: NanoVDB.h:2652
__hostdev__ bool isValid(const GridBlindDataClass &blindClass, const GridBlindDataSemantic &blindSemantics, const GridType &blindType)
return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid...
Definition: NanoVDB.h:637
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:5954
__hostdev__ DenseIterator operator++(int)
Definition: NanoVDB.h:1118
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4199
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:3258
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:5948
Definition: NanoVDB.h:925
Coord CoordType
Definition: NanoVDB.h:4341
Dummy type for a 16bit quantization of float point values.
Definition: NanoVDB.h:197
uint8_t ArrayType
Definition: NanoVDB.h:3922
typename Mask< Log2Dim >::template Iterator< On > MaskIterT
Definition: NanoVDB.h:3288
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:2152
__hostdev__ TreeT & tree()
Return a non-const reference to the tree.
Definition: NanoVDB.h:2069
CoordT mBBoxMin
Definition: NanoVDB.h:4268
__hostdev__ void setFirstNode(const NodeT *node)
Definition: NanoVDB.h:2275
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1639
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3492
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3897
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3204
__hostdev__ void setMin(float min)
Definition: NanoVDB.h:3859
__hostdev__ void setValue(uint32_t offset, bool)
Definition: NanoVDB.h:4122
const std::enable_if<!VecTraits< T >::IsVec, T >::type & max(const T &a, const T &b)
Definition: Composite.h:110
__hostdev__ void setMax(const ValueType &v)
Definition: NanoVDB.h:3786
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4130
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3763
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2674
__hostdev__ ChildIter()
Definition: NanoVDB.h:3308
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4305
__hostdev__ BlindDataT * getBlindData(uint32_t n)
Definition: NanoVDB.h:2211
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:3260
__hostdev__ ValueType getLastValue() const
If the last entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getLastValue() on the child.
Definition: NanoVDB.h:3515
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5308
__hostdev__ bool isOn(uint32_t n) const
Return true if the given bit is set.
Definition: NanoVDB.h:1190
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3995
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1461
uint64_t ValueType
Definition: NanoVDB.h:4147
uint16_t ArrayType
Definition: NanoVDB.h:3952
__hostdev__ const MaskType< LOG2DIM > & childMask() const
Return a const reference to the bit mask of child nodes in this internal node.
Definition: NanoVDB.h:3482
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4166
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4167
ValueT value
Definition: NanoVDB.h:2553
__hostdev__ void setDev(float dev)
Definition: NanoVDB.h:3868
Node caching at all (three) tree levels.
Definition: NanoVDB.h:5549
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:2622
__hostdev__ OnIterator beginOn() const
Definition: NanoVDB.h:1132
Definition: NanoVDB.h:1663
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4307
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5678
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4298
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1618
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4251
BuildT BuildType
Definition: NanoVDB.h:5573
Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode) ...
Definition: NanoVDB.h:3739
__hostdev__ const BBoxType & bbox() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:2900
GridBlindDataSemantic
Blind-data Semantics that are currently understood by NanoVDB.
Definition: NanoVDB.h:424
Version mVersion
Definition: NanoVDB.h:1815
__hostdev__ void setAverageOn(bool on=true)
Definition: NanoVDB.h:1880
__hostdev__ bool isSequential() const
return true if nodes at all levels can safely be accessed with simple linear offsets ...
Definition: NanoVDB.h:2168
__hostdev__ Map()
Default constructor for the identity map.
Definition: NanoVDB.h:1361
GridFlags
Grid flags which indicate what extra information is present in the grid buffer.
Definition: NanoVDB.h:327
Metafunction used to determine if the first template parameter is a specialization of the class templ...
Definition: Util.h:451
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3963
__hostdev__ uint32_t & checksum(int i)
Definition: NanoVDB.h:1739
__hostdev__ DenseIterator()
Definition: NanoVDB.h:3423
uint32_t nameSize
Definition: NanoVDB.h:6285
ReadAccessor< ValueT, LEVEL0, LEVEL1, LEVEL2 > createAccessor(const NanoGrid< ValueT > &grid)
Free-standing function for convenient creation of a ReadAccessor with optional and customizable node ...
Definition: NanoVDB.h:5871
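A hedged usage sketch of the convenience function above, assuming the default node-caching template arguments and a valid NanoGrid<float>; the function name is hypothetical.
    float sampleOne(const nanovdb::NanoGrid<float>& grid)
    {
        auto acc = nanovdb::createAccessor(grid);  // light-weight, cached read accessor
        const nanovdb::Coord ijk(1, 2, 3);
        const float value  = acc.getValue(ijk);    // accelerated random access
        const bool  active = acc.isActive(ijk);    // active state of the same voxel
        return active ? value : 0.0f;
    }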
RootT RootType
Definition: NanoVDB.h:2316
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3833
Definition: GridHandle.h:27
float type
Definition: NanoVDB.h:513
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Definition: NanoVDB.h:5965
CoordBBox bbox
Definition: NanoVDB.h:6598
float Type
Definition: NanoVDB.h:519
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:2361
Visits all tile values and child nodes of this node.
Definition: NanoVDB.h:3417
GridType mGridType
Definition: NanoVDB.h:1825
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4250
__hostdev__ uint64_t gridSize() const
Definition: NanoVDB.h:5955
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5120
Definition: NanoVDB.h:1068
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:3525
GridType gridType
Definition: NanoVDB.h:6280
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4157
Define static boolean tests for template build types.
Definition: NanoVDB.h:440
__hostdev__ bool isFull() const
return true if the 64 bit checksum is full, i.e. covers both the head and the nodes
Definition: NanoVDB.h:1756
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:2150
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:2738
char * sprint(char *dst, T var1, Types...var2)
prints a variable number of strings and/or numbers to a destination string
Definition: Util.h:286
Bit-mask to encode active states and facilitate sequential iterators and a fast codec for I/O compres...
Definition: NanoVDB.h:1034
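A hedged example of the bit mask class above: Mask<3> holds 8^3 = 512 bits, which matches the value mask of a default leaf node.
    void demoMask()
    {
        nanovdb::Mask<3> mask;                  // all 512 bits initialized to off
        mask.setOn(7);                          // set a single bit
        NANOVDB_ASSERT(mask.isOn(7));           // query it
        const uint32_t rank = mask.countOn(7);  // set bits strictly below bit 7, here 0
        (void)rank;
    }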
CoordT CoordType
Definition: NanoVDB.h:5093
__hostdev__ const GridBlindMetaData * blindMetaData(uint32_t n) const
Returns a const reference to the blindMetaData at the specified linear offset.
Definition: NanoVDB.h:1952
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5141
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3246
static ElementType scalar(const T &v)
Definition: NanoVDB.h:749
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:3379
__hostdev__ void setMax(float max)
Definition: NanoVDB.h:3862
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3896
__hostdev__ bool getMax() const
Definition: NanoVDB.h:4066
uint64_t mData2
Definition: NanoVDB.h:1830
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:2482
float mMinimum
Definition: NanoVDB.h:3822
__hostdev__ uint64_t offset() const
Definition: NanoVDB.h:4287
static __hostdev__ Coord OffsetToLocalCoord(uint32_t n)
Definition: NanoVDB.h:3576
__hostdev__ const Vec3d & voxelSize() const
Return a vector of the axial voxel sizes.
Definition: NanoVDB.h:6153
__hostdev__ constexpr uint32_t strlen()
return the number of characters (including null termination) required to convert enum type to a strin...
Definition: NanoVDB.h:209
typename NanoLeaf< BuildT >::FloatType FloatType
Definition: NanoVDB.h:6592
Definition: NanoVDB.h:4143
KeyT key
Definition: NanoVDB.h:2550
uint64_t FloatType
Definition: NanoVDB.h:787
__hostdev__ uint64_t pointCount() const
Definition: NanoVDB.h:4288
typename DataType::StatsT FloatType
Definition: NanoVDB.h:3279
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5371
__hostdev__ uint32_t & tail()
Definition: NanoVDB.h:1748
bool ValueType
Definition: NanoVDB.h:4048
__hostdev__ Tile * probeTile(const CoordT &ijk)
Definition: NanoVDB.h:2570
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4197
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5038
__hostdev__ uint64_t checksum() const
return the 64 bit checksum of this instance
Definition: NanoVDB.h:1737
Dummy type for a voxel whose value equals an offset into an external value array of active values...
Definition: NanoVDB.h:176
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:4378
Top-most node of the VDB tree structure.
Definition: NanoVDB.h:2635
int64_t child
Definition: NanoVDB.h:3176
#define NANOVDB_MAJOR_VERSION_NUMBER
Definition: NanoVDB.h:144
__hostdev__ Vec3T applyJacobianF(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1430
uint8_t ArrayType
Definition: NanoVDB.h:3885
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3767
Struct to derive node type from its level in a given grid, tree or root while preserving constness...
Definition: NanoVDB.h:1596
typename GridT::TreeType type
Definition: NanoVDB.h:2294
__hostdev__ Codec toCodec(const char *str)
Definition: NanoVDB.h:6240
Definition: NanoVDB.h:2690
uint32_t level
Definition: NanoVDB.h:6595
uint16_t padding
Definition: NanoVDB.h:6289
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3933
uint32_t mTileCount[3]
Definition: NanoVDB.h:2258
typename RootT::ChildNodeType Node2
Definition: NanoVDB.h:2327
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3405
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3489
ValueType mMinimum
Definition: NanoVDB.h:3754
__hostdev__ const void * blindData(uint32_t n) const
Returns a const pointer to the blindData at the specified linear offset.
Definition: NanoVDB.h:2196
__hostdev__ GridType toGridType()
Maps from a templated build type to a GridType enum.
Definition: NanoVDB.h:812
size_t strlen(const char *str)
length of a c-string, excluding '\0'.
Definition: Util.h:153
static __hostdev__ uint32_t dim()
Definition: NanoVDB.h:4334
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:5661
uint64_t Type
Definition: NanoVDB.h:470
uint64_t type
Definition: NanoVDB.h:478
Definition: IndexIterator.h:43
__hostdev__ float getDev() const
return the quantized standard deviation of the active values in this node
Definition: NanoVDB.h:3856
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3400
ValueT mMaximum
Definition: NanoVDB.h:3190
__hostdev__ uint64_t idx(int i, int j, int k) const
Definition: NanoVDB.h:6173
static __hostdev__ CoordT OffsetToLocalCoord(uint32_t n)
Compute the local coordinates from a linear offset.
Definition: NanoVDB.h:4505
__hostdev__ const math::BBox< CoordType > & bbox() const
Return a const reference to the bounding box in index space of active values in this internal node an...
Definition: NanoVDB.h:3504
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3522
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this root node...
Definition: NanoVDB.h:2925
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4291
__hostdev__ bool operator<=(const Version &rhs) const
Definition: NanoVDB.h:702
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:4064
T ElementType
Definition: NanoVDB.h:737
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2019
float Type
Definition: NanoVDB.h:533
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5201
uint64_t Type
Definition: NanoVDB.h:477
__hostdev__ uint32_t getMinor() const
Definition: NanoVDB.h:707
Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode) ...
Definition: NanoVDB.h:2480
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3334
Data encoded at the head of each segment of a file or stream.
Definition: NanoVDB.h:6252
__hostdev__ ValueIterator operator++(int)
Definition: NanoVDB.h:4454
__hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const
Return the index of the first blind data with specified semantic if found, otherwise -1...
Definition: NanoVDB.h:2224
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3828
__hostdev__ ValueOffIterator(const LeafNode *parent)
Definition: NanoVDB.h:4393
openvdb::GridBase Grid
Definition: Utils.h:34
__hostdev__ Mask(bool on)
Definition: NanoVDB.h:1144
__hostdev__ void setOff(uint32_t n)
Set the specified bit off.
Definition: NanoVDB.h:1216
__hostdev__ const char * gridName() const
Return a c-string with the name of this grid.
Definition: NanoVDB.h:2171
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4399
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:5942
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2318
double FloatType
Definition: NanoVDB.h:763
Version version
Definition: NanoVDB.h:6290
__hostdev__ int blindDataCount() const
Definition: NanoVDB.h:5963
uint64_t type
Definition: NanoVDB.h:485
__hostdev__ ChildT * getChild(const Tile *tile)
Returns a const reference to the child node in the specified tile.
Definition: NanoVDB.h:2603
__hostdev__ const Checksum & checksum() const
Return checksum of the grid buffer.
Definition: NanoVDB.h:2177
Index64 memUsage(const TreeT &tree, bool threaded=true)
Return the total amount of memory in bytes occupied by this tree.
Definition: Count.h:493
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5296
GridClass mGridClass
Definition: NanoVDB.h:1824
__hostdev__ Version(uint32_t data)
Constructor from a raw uint32_t data representation.
Definition: NanoVDB.h:691
Dummy type for a voxel whose value equals an offset into an external value array. ...
Definition: NanoVDB.h:173
Maps one type (e.g. the build types above) to other (actual) types.
Definition: NanoVDB.h:461
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2339
__hostdev__ uint32_t nodeCount(uint32_t level) const
Definition: NanoVDB.h:5966
__hostdev__ ValueType getLastValue() const
Return the last value in this leaf node.
Definition: NanoVDB.h:4560
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3871
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5150
__hostdev__ bool isHalf() const
Definition: NanoVDB.h:1753
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4198
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4062
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:2512
__hostdev__ ValueIterator(const LeafNode *parent)
Definition: NanoVDB.h:4426
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4960
typename RootT::ValueType ValueType
Definition: NanoVDB.h:4942
__hostdev__ uint32_t id() const
Definition: NanoVDB.h:705
__hostdev__ size_t memUsage() const
Definition: NanoVDB.h:3993
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:4300
__hostdev__ const NodeT * getFirstNode() const
return a const pointer to the first node of the specified type
Definition: NanoVDB.h:2418
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:2649
__hostdev__ uint32_t getPatch() const
Definition: NanoVDB.h:708
Definition: NanoVDB.h:2790
__hostdev__ DenseIter(RootT *parent)
Definition: NanoVDB.h:2844
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this internal ...
Definition: NanoVDB.h:3501
__hostdev__ void setOn(uint32_t n)
Set the specified bit on.
Definition: NanoVDB.h:1214
__hostdev__ const uint64_t & firstOffset() const
Definition: NanoVDB.h:4164
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4371
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1470
__hostdev__ bool isCompatible() const
Definition: NanoVDB.h:709
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5145
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5367
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3932
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5374
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:2364
const typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1611
__hostdev__ int age() const
Returns the difference between the major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER.
Definition: NanoVDB.h:713
__hostdev__ bool isRootNext() const
return true if RootData is laid out immediately after TreeData in memory
Definition: NanoVDB.h:2284
__hostdev__ NodeT * getFirstNode()
return a pointer to the first node of the specified type
Definition: NanoVDB.h:2408
__hostdev__ const NodeTrait< RootT, 2 >::type * getFirstUpper() const
Definition: NanoVDB.h:2448
#define NANOVDB_MAGIC_NODE
Definition: NanoVDB.h:137
CheckMode
List of different modes for computing for a checksum.
Definition: NanoVDB.h:1680
__hostdev__ void setAvg(const FloatType &v)
Definition: NanoVDB.h:3787
__hostdev__ uint8_t bitWidth() const
Definition: NanoVDB.h:3992
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this root node and any of...
Definition: NanoVDB.h:2919
MatType scale(const Vec3< typename MatType::value_type > &s)
Return a matrix that scales by s.
Definition: Mat.h:615
__hostdev__ ValueIter operator++(int)
Definition: NanoVDB.h:2775
bool isValid() const
Definition: NanoVDB.h:6257
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3198
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4433
uint16_t gridCount
Definition: NanoVDB.h:6255
__hostdev__ void extrema(ValueType &min, ValueType &max) const
Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2468
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1083
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3843
__hostdev__ T & getValue(const math::Coord &ijk, T *channelPtr) const
Return the value from a specified channel that maps to the specified coordinate.
Definition: NanoVDB.h:6192
typename Node2::ChildNodeType Node1
Definition: NanoVDB.h:2328
Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
Definition: NanoVDB.h:188
static __hostdev__ uint64_t memUsage()
return memory usage in bytes for the class
Definition: NanoVDB.h:2342
RootT RootNodeType
Definition: NanoVDB.h:2317
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1903
__hostdev__ uint64_t first(uint32_t i) const
Definition: NanoVDB.h:4289
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4240
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:5953
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:2146
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5302
uint32_t countOn(uint64_t v)
Definition: Util.h:622
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, uint32_t channelID=0u)
Ctor from an IndexGrid and an integer ID of an internal channel that is assumed to exist as blind dat...
Definition: NanoVDB.h:6123
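A hedged sketch of the channel accessor above. It assumes the channel value type is the first template parameter (e.g. ChannelAccessor<float>) and that the second parameter defaults to ValueIndex; check the template signature in your header before relying on this.
    float readChannel(const nanovdb::NanoGrid<nanovdb::ValueIndex>& indexGrid)
    {
        nanovdb::ChannelAccessor<float> acc(indexGrid, 0u); // channel 0, stored as blind data
        return acc.getValue(nanovdb::Coord(1, 2, 3));       // channel value at that voxel
    }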
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:6001
void ArrayType
Definition: NanoVDB.h:4149
__hostdev__ ChannelT & operator()(const math::Coord &ijk) const
Definition: NanoVDB.h:6177
__hostdev__ uint32_t countOn(uint32_t i) const
Return the number of lower set bits in mask up to but excluding the i'th bit.
Definition: NanoVDB.h:1059
__hostdev__ ChildIter()
Definition: NanoVDB.h:2697
__hostdev__ bool hasStats() const
Definition: NanoVDB.h:4162
Definition: NanoVDB.h:1522
__hostdev__ uint64_t memUsage() const
Return the actual memory footprint of this root node.
Definition: NanoVDB.h:2931
int64_t child
Definition: NanoVDB.h:2551
__hostdev__ void fill(const ValueType &v)
Definition: NanoVDB.h:3793
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:4120
BuildT ArrayType
Definition: NanoVDB.h:3746
uint32_t mBlindMetadataCount
Definition: NanoVDB.h:1827
__hostdev__ OffIterator beginOff() const
Definition: NanoVDB.h:1134
__hostdev__ DenseIterator beginDense()
Definition: NanoVDB.h:2883
BuildT BuildType
Definition: NanoVDB.h:5091
Version version
Definition: NanoVDB.h:6254
__hostdev__ bool getMin() const
Definition: NanoVDB.h:4065
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4476
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4998
__hostdev__ bool getMax() const
Definition: NanoVDB.h:4119
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2806
bool BuildType
Definition: NanoVDB.h:4049
__hostdev__ CoordT origin() const
Definition: NanoVDB.h:2549
__hostdev__ bool operator<(const Version &rhs) const
Definition: NanoVDB.h:701
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5375
__hostdev__ const Vec3dBBox & worldBBox() const
return AABB of active values in world space
Definition: NanoVDB.h:1978
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4074
__hostdev__ uint8_t flags() const
Definition: NanoVDB.h:4497
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:3249
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2018
__hostdev__ bool isEmpty() const
Return true if this RootNode is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:2934
VecT< GridHandleT > readUncompressedGrids(const char *fileName, const typename GridHandleT::BufferType &buffer=typename GridHandleT::BufferType())
Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
Definition: NanoVDB.h:6422
uint64_t Type
Definition: NanoVDB.h:540
CoordT mBBoxMin
Definition: NanoVDB.h:4152
__hostdev__ uint32_t checksum(int i) const
Definition: NanoVDB.h:1741
__hostdev__ bool operator!=(const Mask &other) const
Definition: NanoVDB.h:1187
CoordT CoordType
Definition: NanoVDB.h:5276
Dummy type for a variable bit quantization of floating point values.
Definition: NanoVDB.h:200
__hostdev__ Vec3T indexToWorldF(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2109
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:5943
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:5952
__hostdev__ const MaskType< LOG2DIM > & getChildMask() const
Definition: NanoVDB.h:3483
StatsT mAverage
Definition: NanoVDB.h:2518
Visits all values in a leaf node, i.e. both active and inactive values.
Definition: NanoVDB.h:4415
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:3257
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:2153
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3371
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2548
Visits active tile values of this node only.
Definition: NanoVDB.h:3383
__hostdev__ const NodeTrait< RootT, LEVEL >::type * getFirstNode() const
return a const pointer to the first node of the specified level
Definition: NanoVDB.h:2437
#define NANOVDB_HOSTDEV_DISABLE_WARNING
Definition: Util.h:94
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3772
Visits all inactive values in a leaf node.
Definition: NanoVDB.h:4382
__hostdev__ const TreeType & tree() const
Return a const reference to the tree of the IndexGrid.
Definition: NanoVDB.h:6150
Like ValueIndex but with a mutable mask.
Definition: NanoVDB.h:179
typename RootT::ValueType ValueType
Definition: NanoVDB.h:2321
static __hostdev__ uint64_t memUsage(uint32_t tableSize)
Return the expected memory footprint in bytes with the specified number of tiles. ...
Definition: NanoVDB.h:2928
Definition: NanoVDB.h:1665
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:4462
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:4111
GridMetaData(const NanoGrid< T > &grid)
Definition: NanoVDB.h:5903
float FloatType
Definition: NanoVDB.h:757
__hostdev__ CheckMode mode() const
return the mode of the 64 bit checksum
Definition: NanoVDB.h:1764
__hostdev__ bool isMask() const
Definition: NanoVDB.h:2148
__hostdev__ Vec3T applyJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1905
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5148
__hostdev__ void setMax(const bool &)
Definition: NanoVDB.h:4076
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2017
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:2483
typename BuildT::ValueType ValueType
Definition: NanoVDB.h:2021
__hostdev__ uint32_t nodeCount() const
Return number of nodes at LEVEL.
Definition: NanoVDB.h:1943
float ValueType
Definition: NanoVDB.h:3814
__hostdev__ Mask & operator&=(const Mask &other)
Bitwise intersection.
Definition: NanoVDB.h:1272
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5468
__hostdev__ const TreeT & tree() const
Return a const reference to the tree.
Definition: NanoVDB.h:2066
__hostdev__ bool safeCast() const
return true if the RootData follows right after the TreeData. If so, this implies that it's safe to c...
Definition: NanoVDB.h:5926
uint32_t findLowestOn(uint32_t v)
Returns the index of the lowest, i.e. least significant, on bit in the specified 32 bit word...
Definition: Util.h:502
__hostdev__ ChannelT & getValue(const math::Coord &ijk) const
Return the value from a cached channel that maps to the specified coordinate.
Definition: NanoVDB.h:6176
uint64_t mData1
Definition: NanoVDB.h:1829
BitFlags(Type mask)
Definition: NanoVDB.h:933
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:2996
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:3445
bool streq(const char *lhs, const char *rhs)
Test if two null-terminated byte strings are the same.
Definition: Util.h:268
__hostdev__ Vec3T worldToIndexDirF(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2119
__hostdev__ Iterator()
Definition: NanoVDB.h:1071
typename ChildT::template MaskType< LOG2DIM > MaskT
Definition: NanoVDB.h:3170
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4218
BitFlags< 32 > mFlags
Definition: NanoVDB.h:1816
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1907
__hostdev__ ValueOnIter operator++(int)
Definition: NanoVDB.h:2819
uint8_t mFlags
Definition: NanoVDB.h:4270
__hostdev__ void setAvg(const bool &)
Definition: NanoVDB.h:4077
__hostdev__ void setMin(const ValueType &v)
Definition: NanoVDB.h:3785
__hostdev__ bool getMin() const
Definition: NanoVDB.h:4118
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:2144
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:3341
void writeUncompressedGrid(StreamT &os, const GridData *gridData, bool raw=false)
This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h. Unlike the latter, this function has no dependencies at all, not even NanoVDB.h, so it also works if client code only includes PNanoVDB.h!
Definition: NanoVDB.h:6318
__hostdev__ bool isEmpty() const
test if the grid is empty, i.e. the root table has size 0
Definition: NanoVDB.h:1992
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:6068
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:3324
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3820
#define __device__
Definition: Util.h:79
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1892
int64_t mBlindMetadataOffset
Definition: NanoVDB.h:1826
float mTaperF
Definition: NanoVDB.h:1354
Implements Tree::probeLeaf(math::Coord)
Definition: NanoVDB.h:1673
__hostdev__ ChildIter(RootT *parent)
Definition: NanoVDB.h:2701
__hostdev__ void setValue(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3773
__hostdev__ CoordBBox bbox() const
Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2281
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4155
typename RootT::CoordType CoordType
Definition: NanoVDB.h:4943
__hostdev__ MagicType toMagic(uint64_t magic)
maps 64 bits of magic number to enum
Definition: NanoVDB.h:368
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3469
GridClass gridClass
Definition: NanoVDB.h:6281
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:2142
Codec codec
Definition: NanoVDB.h:6256
RootType RootNodeType
Definition: NanoVDB.h:2016
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5632
static __hostdev__ uint64_t memUsage()
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:1975
uint64_t gridSize
Definition: NanoVDB.h:6279
__hostdev__ void setValueOnly(const CoordT &ijk, const ValueType &v)
Definition: NanoVDB.h:4571
__hostdev__ const NodeT * getNode() const
Return a const point to the cached node of the specified type.
Definition: NanoVDB.h:5616
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3957
__hostdev__ bool isFloatingPoint(GridType gridType)
return true if the GridType maps to a floating point type
Definition: NanoVDB.h:563
CoordT mBBoxMin
Definition: NanoVDB.h:4054
__hostdev__ void setOff()
Set all bits off.
Definition: NanoVDB.h:1252
__hostdev__ void localToGlobalCoord(Coord &ijk) const
modifies local coordinates to global coordinates of a tile or child node
Definition: NanoVDB.h:3584
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:2621
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3366
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3526
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:2942
__hostdev__ Vec3T indexToWorldGrad(const Vec3T &grad) const
transform the gradient from index space to world space.
Definition: NanoVDB.h:2101
__hostdev__ uint64_t activeVoxelCount() const
Computes an AABB of active values in world space.
Definition: NanoVDB.h:2136
__hostdev__ const LeafNode * probeLeaf(const CoordT &) const
Definition: NanoVDB.h:4595
__hostdev__ uint64_t * words()
Return a pointer to the list of words of the bit mask.
Definition: NanoVDB.h:1159
__hostdev__ void init(float min, float max, uint8_t bitWidth)
Definition: NanoVDB.h:3837
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4997
__hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
Constructor from major.minor.patch version numbers.
Definition: NanoVDB.h:693
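A hedged example of the bit-compacted Version class above; the version numbers themselves are made up.
    void demoVersion()
    {
        const nanovdb::Version a(32, 6, 0), b(32, 7, 1); // major.minor.patch
        NANOVDB_ASSERT(a < b);                           // lexicographic comparison
        NANOVDB_ASSERT(b.getMinor() == 7u && b.getPatch() == 1u);
        (void)a; (void)b;
    }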
__hostdev__ Mask & operator|=(const Mask &other)
Bitwise union.
Definition: NanoVDB.h:1280
static __hostdev__ uint32_t voxelCount()
Return the total number of voxels (e.g. values) encoded in this leaf node.
Definition: NanoVDB.h:4538
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:3167
__hostdev__ DataType * data()
Definition: NanoVDB.h:4474
__hostdev__ Mask & operator-=(const Mask &other)
Bitwise difference.
Definition: NanoVDB.h:1288
__hostdev__ Checksum(uint64_t checksum, CheckMode mode=CheckMode::Full)
Definition: NanoVDB.h:1730
static __hostdev__ bool safeCast(const NanoGrid< T > &grid)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5937
__hostdev__ ValueIterator beginValue()
Definition: NanoVDB.h:2786
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5605
uint64_t mPointCount
Definition: NanoVDB.h:4274
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4995
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4161
__hostdev__ CoordType origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:3486
__hostdev__ DenseIterator(uint32_t pos=Mask::SIZE)
Definition: NanoVDB.h:1105
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4404
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4249
PointAccessor(const NanoGrid< BuildT > &grid)
Definition: NanoVDB.h:5984
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arit...
Definition: NanoVDB.h:1479
__hostdev__ void setValueOnly(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4292
__hostdev__ uint64_t getIndex(const math::Coord &ijk) const
Return the linear offset into a channel that maps to the specified coordinate.
Definition: NanoVDB.h:6172
ValueT mMinimum
Definition: NanoVDB.h:3189
__hostdev__ bool setGridName(const char *src)
Definition: NanoVDB.h:1882
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4241
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4057
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts (in place) a local index coordinate to a global index coordinate.
Definition: NanoVDB.h:4513
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5582
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4311
uint64_t ValueType
Definition: NanoVDB.h:4262
Dummy type for a 8bit quantization of float point values.
Definition: NanoVDB.h:194
__hostdev__ DataType * data()
Definition: NanoVDB.h:2337
typename NanoLeaf< BuildT >::ValueType ValueType
Definition: NanoVDB.h:6591
MagicType
Enums used to identify magic numbers recognized by NanoVDB.
Definition: NanoVDB.h:357
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5817
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2861
Dummy type for a voxel whose value equals its binary active state.
Definition: NanoVDB.h:185
uint8_t mFlags
Definition: NanoVDB.h:4154
uint64_t mPrefixSum
Definition: NanoVDB.h:4156
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:2682
__hostdev__ Vec3T applyJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1894
__hostdev__ util::enable_if< util::is_same< T, Point >::value, const uint64_t & >::type pointCount() const
Return the total number of points indexed by this PointGrid.
Definition: NanoVDB.h:2063
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5319
__hostdev__ ChildIter(ParentT *parent)
Definition: NanoVDB.h:3313
uint32_t mGridIndex
Definition: NanoVDB.h:1817
__hostdev__ ValueOnIterator(const LeafNode *parent)
Definition: NanoVDB.h:4360
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5373
uint64_t mVoxelCount
Definition: NanoVDB.h:2259
static __hostdev__ uint32_t CoordToOffset(const CoordType &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:3568
Vec3d voxelSize
Definition: NanoVDB.h:6284
__hostdev__ uint32_t nodeCount() const
Definition: NanoVDB.h:2387
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5372
uint64_t type
Definition: NanoVDB.h:471
GridBlindDataSemantic mSemantic
Definition: NanoVDB.h:1528
__hostdev__ Vec3T applyMap(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 64bit floating point arithmetic.
Definition: NanoVDB.h:1404
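A hedged usage sketch (assuming nanovdb/NanoVDB.h is included and grid is a valid const nanovdb::NanoGrid<float>*): the Map returned by Grid::map() can be applied directly, with worldToIndex/indexToWorld acting as thin wrappers around applyInverseMap/applyMap.
// Minimal sketch; "grid" is a hypothetical, valid const nanovdb::NanoGrid<float>*.
const nanovdb::Map& map = grid->map();
nanovdb::Vec3d world = map.applyMap(nanovdb::Vec3d(1.0, 2.0, 3.0)); // index -> world
nanovdb::Vec3d index = map.applyInverseMap(world);                  // world -> index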
CoordT CoordType
Definition: NanoVDB.h:5575
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3410
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3769
__hostdev__ const CoordBBox & indexBBox() const
Return the AABB of active values in index space.
Definition: NanoVDB.h:1981
__hostdev__ bool isFloatingPointVector(GridType gridType)
Return true if the GridType maps to a floating point vec3.
Definition: NanoVDB.h:577
ValueT mBackground
Definition: NanoVDB.h:2515
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1640
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:3389
__hostdev__ bool isInteger(GridType gridType)
Return true if the GridType maps to a POD integer type.
Definition: NanoVDB.h:589
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2348
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is larger than zero, the begin and end iterators point to the attributes of those points.
Definition: NanoVDB.h:6078
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findPrev(uint32_t start) const
Definition: NanoVDB.h:1330
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:2940
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2757
math::Extrema extrema(const IterT &iter, bool threaded=true)
Iterate over a scalar grid and compute extrema (min/max) of the values of the voxels that are visited...
Definition: Statistics.h:354
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5126
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1626
Codec codec
Definition: NanoVDB.h:6288
Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation...
Definition: NanoVDB.h:1349
uint64_t FloatType
Definition: NanoVDB.h:4148
float Type
Definition: NanoVDB.h:512
__hostdev__ void clear()
Reset this accessor to its initial state, i.e. with an empty cache. This is a no-op, since this template specialization has no cache.
Definition: NanoVDB.h:4979
__hostdev__ const Tile * probeTile(const CoordT &ijk) const
Definition: NanoVDB.h:2595
#define NANOVDB_ASSERT(x)
Definition: Util.h:50
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1633
GridType
List of types that are currently supported by NanoVDB.
Definition: NanoVDB.h:219
Vec3dBBox worldBBox
Definition: NanoVDB.h:6282
uint32_t mValueSize
Definition: NanoVDB.h:1527
typename BuildT::CoordType CoordType
Definition: NanoVDB.h:2023
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:4379
__hostdev__ DenseIterator & operator++()
Definition: NanoVDB.h:1113
__hostdev__ void setOn()
Set all bits on.
Definition: NanoVDB.h:1246
__hostdev__ FloatType average() const
Return a const reference to the average of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4489
Class to access points at a specific voxel location.
Definition: NanoVDB.h:5977
__hostdev__ Mask & operator^=(const Mask &other)
Bitwise XOR.
Definition: NanoVDB.h:1296
static __hostdev__ bool safeCast(const GridData *gridData)
Return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5930
ValueT mMaximum
Definition: NanoVDB.h:2517
static __hostdev__ uint64_t alignmentPadding(const void *p)
Return the smallest number of bytes that, when added to the specified pointer, results in a 32 byte aligned pointer.
Definition: NanoVDB.h:550
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5100
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:1090
ValueT mMinimum
Definition: NanoVDB.h:2516
__hostdev__ ChildIter operator++(int)
Definition: NanoVDB.h:2726
bool FloatType
Definition: NanoVDB.h:799
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Return the total number of active tiles at the specified level of the tree.
Definition: NanoVDB.h:2380
C++11 implementation of std::is_floating_point.
Definition: Util.h:329
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:4303
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3361
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4981
uint64_t mValueCount
Definition: NanoVDB.h:1526
__hostdev__ DataType * data()
Definition: NanoVDB.h:3467
const typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1655
__hostdev__ const BlindDataT * getBlindData() const
Get a const pointer to the blind data represented by this meta data.
Definition: NanoVDB.h:1550
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:3259
__hostdev__ ValueT getValue(uint32_t n) const
Definition: NanoVDB.h:3231
static DstT * PtrAdd(void *p, int64_t offset)
Adds a byte offset to a non-const pointer to produce another non-const pointer.
Definition: Util.h:478
__hostdev__ void setValue(const CoordT &ijk, const ValueType &v)
Sets the value at the specified location and activate its state.
Definition: NanoVDB.h:4565
__hostdev__ ValueOnIter & operator++()
Definition: NanoVDB.h:2811
__hostdev__ float getAvg() const
Return the quantized average of the active values in this node.
Definition: NanoVDB.h:3852
Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data and one for the remaining grid nodes.
Definition: NanoVDB.h:1706
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5680
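For illustration only (acc is assumed to be a ReadAccessor obtained from grid->getAccessor() on a float grid): probeLeaf can be used to test for, and then read from, the leaf containing a coordinate.
const nanovdb::Coord ijk(7, 7, 7);
if (const auto* leaf = acc.probeLeaf(ijk)) {
    float v = leaf->getValue(ijk); // leaf-local lookup, no further tree traversal
}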
__hostdev__ BaseIter(DataT *data=nullptr, uint32_t n=0)
Definition: NanoVDB.h:2665
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this tree.
Definition: NanoVDB.h:2373
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4993
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:5940
__hostdev__ float getMax() const
Return the quantized maximum of the active values in this node.
Definition: NanoVDB.h:3849
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:3166
__hostdev__ bool getDev() const
Definition: NanoVDB.h:4068
Implements Tree::getDim(math::Coord)
Definition: NanoVDB.h:1669
Definition: NanoVDB.h:2658
__hostdev__ const ValueType & background() const
Return a const reference to the background value, i.e. the value associated with unset voxel coordinates.
Definition: NanoVDB.h:2906
__hostdev__ DenseIterator beginDense() const
Definition: NanoVDB.h:3458
Codec
Define compression codecs.
Definition: NanoVDB.h:6224
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Definition: NanoVDB.h:1898
__hostdev__ uint32_t countOn() const
Return the total number of set bits in this Mask.
Definition: NanoVDB.h:1050
uint8_t mFlags
Definition: NanoVDB.h:3819
__hostdev__ bool isChild(uint32_t n) const
Definition: NanoVDB.h:3243
Internal nodes of a VDB tree.
Definition: NanoVDB.h:3274
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:4355
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:4301
__hostdev__ ConstDenseIterator cbeginChildAll() const
Definition: NanoVDB.h:2885
static __hostdev__ T * alignPtr(T *p)
Offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers.
Definition: NanoVDB.h:558
__hostdev__ bool isOn() const
Return true if all the bits are set in this Mask.
Definition: NanoVDB.h:1196
ValueT ValueType
Definition: NanoVDB.h:5574
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4992
BuildT TreeType
Definition: NanoVDB.h:2014
Base-class for quantized float leaf nodes.
Definition: NanoVDB.h:3810
uint64_t FloatType
Definition: NanoVDB.h:793
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:3184
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:2943
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:2619
__hostdev__ Vec3T worldToIndexF(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2105
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4061
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1625
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4215
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3450
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:5133
__hostdev__ Vec3T worldToIndex(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2082
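A hedged sketch (grid is a hypothetical, valid const nanovdb::NanoGrid<float>*): worldToIndex returns a fractional index-space position suitable for interpolation, and worldToIndexF is the 32-bit variant.
nanovdb::Vec3d xyzWorld(0.5, 1.5, -2.0);
nanovdb::Vec3d xyzIndex  = grid->worldToIndex(xyzWorld);                  // double precision
nanovdb::Vec3f xyzIndexF = grid->worldToIndexF(nanovdb::Vec3f(xyzWorld)); // float precision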
Definition: NanoVDB.h:2254
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4285
C++11 implementation of std::is_same.
Definition: Util.h:314
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5594
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3826
__hostdev__ void setValue(uint32_t offset, bool v)
Definition: NanoVDB.h:4069
__hostdev__ bool isActive() const
Return true if this node or any of its child nodes contain active values.
Definition: NanoVDB.h:3598
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel (regardless of state or location in the tree).
Definition: NanoVDB.h:2351
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4195
Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode).
Definition: NanoVDB.h:3164
static __hostdev__ constexpr int64_t memUsage()
Definition: NanoVDB.h:3925
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:2990
__hostdev__ const NanoGrid< Point > & grid() const
Definition: NanoVDB.h:6064
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3956
const typename GridT::TreeType Type
Definition: NanoVDB.h:2299
Dummy type for a 4bit quantization of float point values.
Definition: NanoVDB.h:191
__hostdev__ bool operator!=(const Checksum &rhs) const
return true if the checksums are not identical
Definition: NanoVDB.h:1776
__hostdev__ uint64_t gridSize() const
Return the memory footprint of the entire grid in bytes, i.e. including all nodes and blind data.
Definition: NanoVDB.h:2043
__hostdev__ Version version() const
Definition: NanoVDB.h:5970
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:3169
uint64_t mCRC64
Definition: NanoVDB.h:1712
__hostdev__ uint64_t & full()
Definition: NanoVDB.h:1744
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4124
Return a pointer to the lower internal node where the math::Coord maps to one of its values, i.e. where tree traversal terminates.
Definition: NanoVDB.h:6532
uint64_t type
Definition: NanoVDB.h:492
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:4063
const typename GridT::TreeType type
Definition: NanoVDB.h:2300
__hostdev__ NodeTrait< RootT, 1 >::type * getFirstLower()
Definition: NanoVDB.h:2445
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5147
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this internal node and any of its child nodes...
Definition: NanoVDB.h:3498
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:3248
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Get iterators over the attributes of the points at a specific voxel location.
Definition: NanoVDB.h:6089
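A hedged sketch of point access (assuming grid points to a NanoGrid<nanovdb::Point> whose point attributes are stored as nanovdb::Vec3f; the attribute type, template arguments and variable names are illustrative assumptions).
nanovdb::PointAccessor<nanovdb::Vec3f, nanovdb::Point> acc(*grid);
const nanovdb::Vec3f *begin = nullptr, *end = nullptr;
uint64_t count = acc.voxelPoints(nanovdb::Coord(0, 0, 0), begin, end); // points in that voxel
for (const nanovdb::Vec3f* p = begin; p != end; ++p) {
    // *p is one point attribute (e.g. a position) stored in this voxel
}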
uint8_t mFlags
Definition: NanoVDB.h:4056
T type
Definition: Util.h:387
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4217
__hostdev__ void setBBoxOn(bool on=true)
Definition: NanoVDB.h:1878
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:2149
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:2524
__hostdev__ uint32_t head() const
Definition: NanoVDB.h:1745
T type
Definition: NanoVDB.h:464
__hostdev__ ValueIterator & operator++()
Definition: NanoVDB.h:4449
__hostdev__ uint32_t blindDataCount() const
Return the number of blind-data entries encoded in this grid.
Definition: NanoVDB.h:2183
__hostdev__ void setChild(uint32_t n, const void *ptr)
Definition: NanoVDB.h:3206
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1896
__hostdev__ bool operator==(const Version &rhs) const
Definition: NanoVDB.h:700
Struct with all the member data of the Grid (useful during serialization of an openvdb grid) ...
Definition: NanoVDB.h:1810
typename ChildT::template MaskType< LOG2 > MaskType
Definition: NanoVDB.h:3286
auto callNanoGrid(GridDataT *gridData, ArgsT &&...args)
Below is an example of the struct used for generic programming with callNanoGrid. ...
Definition: NanoVDB.h:4855
Implements Tree::isActive(math::Coord)
Definition: NanoVDB.h:1667
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:1449
Definition: NanoVDB.h:728
__hostdev__ bool probeValue(const CoordT &ijk, ValueType &v) const
Return true if the voxel value at the given coordinate is active and updates v with the value...
Definition: NanoVDB.h:4588
__hostdev__ NodeTrait< RootT, 2 >::type * getFirstUpper()
Definition: NanoVDB.h:2447
__hostdev__ void toggle()
Toggle the state of all bits in the mask.
Definition: NanoVDB.h:1264
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:2145
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4306
uint32_t mData0
Definition: NanoVDB.h:1828
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4366
Dummy type for indexing points into voxels.
Definition: NanoVDB.h:203
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:3479
__hostdev__ const void * blindData() const
Definition: NanoVDB.h:1543
__hostdev__ ValueType getValue(const CoordT &ijk) const
Return the voxel value at the given coordinate.
Definition: NanoVDB.h:4555
static __hostdev__ size_t memUsage()
Return memory usage in bytes for the class.
Definition: NanoVDB.h:3475
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:3319
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:3168
Definition: NanoVDB.h:902
__hostdev__ bool isActive(const CoordType &ijk) const
Return the active state of the given voxel (regardless of state or location in the tree).
Definition: NanoVDB.h:2355
__hostdev__ const ChildT * getChild(uint32_t n) const
Definition: NanoVDB.h:3225
uint32_t findHighestOn(uint32_t v)
Returns the index of the highest, i.e. most significant, on bit in the specified 32 bit word...
Definition: Util.h:572
DataT * mData
Definition: NanoVDB.h:2663
__hostdev__ uint64_t activeVoxelCount() const
Definition: NanoVDB.h:5964
bool Type
Definition: NanoVDB.h:498
Definition: NanoVDB.h:1102
__hostdev__ ValueType getFirstValue() const
If the first entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getFirstValue() on the child.
Definition: NanoVDB.h:3508
StatsT mAverage
Definition: NanoVDB.h:3191
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3964
Definition: NanoVDB.h:2834
__hostdev__ const Map & map() const
Definition: NanoVDB.h:5959
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:3380
CoordT mBBoxMin
Definition: NanoVDB.h:3817
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:2708
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:2484
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4196
__hostdev__ ValueType getValue(uint32_t offset) const
Return the voxel value at the given offset.
Definition: NanoVDB.h:4552
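Illustrative only (leaf is assumed to be a valid const pointer to a nanovdb::NanoLeaf<float> with the default 8x8x8 layout): CoordToOffset maps a global coordinate to the leaf-local linear offset consumed by getValue(uint32_t).
const nanovdb::Coord ijk(9, 2, 3);
uint32_t n = leaf->CoordToOffset(ijk); // (9,2,3) and (1,2,3) share the same leaf-local offset
float    v = leaf->getValue(n);        // equivalent to leaf->getValue(ijk)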
__hostdev__ ValueIter & operator++()
Definition: NanoVDB.h:2767
MaskT mValueMask
Definition: NanoVDB.h:3186
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findNext(uint32_t start) const
Definition: NanoVDB.h:1316
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:2677
__hostdev__ uint32_t totalNodeCount() const
Definition: NanoVDB.h:2399
uint16_t mMin
Definition: NanoVDB.h:3824
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:2485
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1632
__hostdev__ Vec3d voxelSize() const
Definition: NanoVDB.h:5962
typename FloatTraits< ValueType >::FloatType FloatType
Definition: NanoVDB.h:4264
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:2614
Like ValueOnIndex but with a mutable mask.
Definition: NanoVDB.h:182
const std::enable_if<!VecTraits< T >::IsVec, T >::type & min(const T &a, const T &b)
Definition: Composite.h:106
GridMetaData(const GridData *gridData)
Definition: NanoVDB.h:5910
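A hedged sketch (buffer is assumed to point to the start of a valid, 32 byte aligned NanoVDB grid): GridMetaData exposes grid meta-data without knowing the build/value type, and safeCast can avoid the copy altogether.
const auto* gridData = reinterpret_cast<const nanovdb::GridData*>(buffer);
nanovdb::GridMetaData meta(gridData);        // copies the relevant meta data
uint64_t       voxels = meta.activeVoxelCount();
nanovdb::Vec3d dx     = meta.voxelSize();
if (nanovdb::GridMetaData::safeCast(gridData)) {
    const auto* metaPtr = reinterpret_cast<const nanovdb::GridMetaData*>(gridData); // zero-copy view
}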
const typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1610
__hostdev__ DataType * data()
Definition: NanoVDB.h:2035
MaskT< LOG2DIM > mValues
Definition: NanoVDB.h:4058
This is a convenient class that allows for access to grid meta-data that are independent of the value type of the grid.
Definition: NanoVDB.h:5894
__hostdev__ int findBlindData(const char *name) const
Return the index of the first blind data with the specified name if found, otherwise -1.
Definition: NanoVDB.h:2234
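Illustrative sketch only (the channel name "uvw" and the element type float are hypothetical): findBlindData pairs with blindMetaData and GridBlindMetaData::getBlindData to reach auxiliary channels.
int i = grid->findBlindData("uvw");  // -1 if no blind data has that name
if (i >= 0) {
    const nanovdb::GridBlindMetaData& meta = grid->blindMetaData(uint32_t(i));
    const float* channel = meta.getBlindData<float>();
    // meta.mValueCount entries of type meta.mDataType are available through "channel"
}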
__hostdev__ uint32_t gridCount() const
Definition: NanoVDB.h:5957
uint32_t mTableSize
Definition: NanoVDB.h:2513
typename BuildT::BuildType BuildType
Definition: NanoVDB.h:2022
typename T::ValueType ElementType
Definition: NanoVDB.h:748
__hostdev__ bool isMask() const
Definition: NanoVDB.h:5947
__hostdev__ uint64_t memUsage() const
Return the memory usage in bytes for the leaf node.
Definition: NanoVDB.h:4543
__hostdev__ bool isSequential() const
Return true if the specified node type is laid out breadth-first in memory and has a fixed size...
Definition: NanoVDB.h:2160
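Because leaf nodes are stored breadth-first with a fixed size, they can be visited as a flat array; a hedged sketch (grid is assumed to be a non-empty const nanovdb::NanoGrid<float>*).
const auto& tree = grid->tree();
const auto* leaf = tree.getFirstLeaf();  // first of tree.nodeCount(0) leaf nodes
for (uint32_t i = 0; i < tree.nodeCount(0); ++i, ++leaf) {
    nanovdb::Coord origin = leaf->origin(); // index-space origin of this 8^3 leaf
}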
Definition: NanoVDB.h:4330
typename RootT::CoordType CoordType
Definition: NanoVDB.h:2323
float type
Definition: NanoVDB.h:534
defines a tree type from a grid type while preserving constness
Definition: NanoVDB.h:2291
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5376
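A small usage sketch (acc is assumed to be a ReadAccessor created via grid->getAccessor() on a float grid).
float value  = 0.0f;
bool  active = acc.probeValue(nanovdb::Coord(10, 20, 30), value); // value is updated either way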
__hostdev__ GridType mapToGridType()
Definition: NanoVDB.h:872
__hostdev__ uint32_t nodeCount(int level) const
Definition: NanoVDB.h:2393
__hostdev__ ChannelT & operator()(int i, int j, int k) const
Definition: NanoVDB.h:6178
__hostdev__ AccessorType getAccessor() const
Return a new instance of a ReadAccessor used to access values in this grid.
Definition: NanoVDB.h:2072
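A minimal usage sketch (grid is a hypothetical, valid const nanovdb::NanoGrid<float>*).
auto  acc = grid->getAccessor();                   // light-weight ReadAccessor
float v   = acc.getValue(nanovdb::Coord(1, 2, 3));
bool  a   = acc.isActive(nanovdb::Coord(1, 2, 3));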
Visits child nodes of this node only.
Definition: NanoVDB.h:3300
__hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:3590
typename remove_const< T >::type type
Definition: Util.h:431
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4112
__hostdev__ void setValue(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4293
__hostdev__ Checksum()
default constructor initiates checksum to EMPTY
Definition: NanoVDB.h:1720
uint64_t Type
Definition: NanoVDB.h:491
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3926
__hostdev__ ValueIterator(const InternalNode *parent)
Definition: NanoVDB.h:3355
typename Mask< 3 >::template Iterator< ON > MaskIterT
Definition: NanoVDB.h:4346
GridType mDataType
Definition: NanoVDB.h:1530
Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
Definition: NanoVDB.h:4327
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5149
__hostdev__ DataType * data()
Definition: NanoVDB.h:2895
__hostdev__ const uint64_t & valueCount() const
Return total number of values indexed by the IndexGrid.
Definition: NanoVDB.h:6156
__hostdev__ NodeTrait< RootT, LEVEL >::type * getFirstNode()
Return a pointer to the first node at the specified level.
Definition: NanoVDB.h:2428
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2547
__hostdev__ Vec3T worldToIndexDir(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2096
__hostdev__ DenseIterator cbeginChildAll() const
Definition: NanoVDB.h:3459
BuildT BuildType
Definition: NanoVDB.h:5274
__hostdev__ uint32_t rootTableSize() const
Return the size of the root table.
Definition: NanoVDB.h:1984
bool FloatType
Definition: NanoVDB.h:4050
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:4585
double mTaperD
Definition: NanoVDB.h:1358
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3455
uint32_t dim
Definition: NanoVDB.h:6595
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:2916
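Hedged sketch (grid is assumed to be a const nanovdb::NanoGrid<float>* whose statistics were included at build time): the per-grid min/max statistics live on the root node.
const auto& root = grid->tree().root();
float vmin = root.minimum();
float vmax = root.maximum();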
MaskT mChildMask
Definition: NanoVDB.h:3187
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:4575
__hostdev__ Version()
Default constructor.
Definition: NanoVDB.h:684
__hostdev__ void setMinMaxOn(bool on=true)
Definition: NanoVDB.h:1877
static __hostdev__ uint32_t valueCount()
Definition: NanoVDB.h:4191
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3752
__hostdev__ const Tile * tile(uint32_t n) const
Returns a const pointer to the tile at the specified linear offset.
Definition: NanoVDB.h:2559
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:3250
__hostdev__ ValueType getFirstValue() const
Return the first value in this leaf node.
Definition: NanoVDB.h:4558
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:3414
typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1646
__hostdev__ ValueOnIterator(const InternalNode *parent)
Definition: NanoVDB.h:3394
__hostdev__ ConstValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:2831
Definition: NanoVDB.h:2529
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5108
FloatType mAverage
Definition: NanoVDB.h:3756
BuildT BuildType
Definition: NanoVDB.h:4941
ValueT ValueType
Definition: NanoVDB.h:5275
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:3561
float Type
Definition: NanoVDB.h:505
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2643
StatsT mStdDevi
Definition: NanoVDB.h:2519
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4171
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2037
__hostdev__ uint32_t & head()
Definition: NanoVDB.h:1746
ValueT value
Definition: NanoVDB.h:3175
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:4281
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:5949
CoordBBox indexBBox
Definition: NanoVDB.h:6283
__hostdev__ uint32_t rootTableSize() const
Definition: NanoVDB.h:5968
__hostdev__ bool isCached1(const CoordType &ijk) const
Definition: NanoVDB.h:5352
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:3237
__hostdev__ ValueOnIterator beginValueOn()
Definition: NanoVDB.h:2830
__hostdev__ const ChildT * getChild(const Tile *tile) const
Definition: NanoVDB.h:2608
__hostdev__ bool isEmpty() const
Return true if the 64 bit checksum is disabled (unset).
Definition: NanoVDB.h:1759
__hostdev__ Iterator(uint32_t pos, const Mask *parent)
Definition: NanoVDB.h:1076
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5283
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:3781
typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1647
__hostdev__ void * nodePtr()
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:1932
__hostdev__ float getMin() const
Return the quantized minimum of the active values in this node.
Definition: NanoVDB.h:3846
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5151
__hostdev__ ValueOffIterator()
Definition: NanoVDB.h:4388
ChildT ChildNodeType
Definition: NanoVDB.h:3282
typename DataType::BuildT BuildType
Definition: NanoVDB.h:3280
__hostdev__ ValueOffIterator cbeginValueOff() const
Definition: NanoVDB.h:4412
uint32_t mSize
Definition: NanoVDB.h:2664
__hostdev__ GridClass toGridClass(GridClass defaultClass=GridClass::Unknown)
Maps from a templated build type to a GridClass enum.
Definition: NanoVDB.h:878
typename DataType::ValueType ValueType
Definition: NanoVDB.h:4338
float type
Definition: NanoVDB.h:527
__hostdev__ uint32_t getMajor() const
Definition: NanoVDB.h:706
__hostdev__ Vec3T indexToWorldGradF(const Vec3T &grad) const
Transforms the gradient from index space to world space.
Definition: NanoVDB.h:2124
__hostdev__ const NodeTrait< TreeT, LEVEL >::type * getNode() const
Definition: NanoVDB.h:5624
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:2151
uint64_t type
Definition: NanoVDB.h:541
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:4302
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3281
__hostdev__ void setValue(const CoordType &k, bool s, const ValueType &v)
Definition: NanoVDB.h:2539
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5677
__hostdev__ const uint64_t * words() const
Definition: NanoVDB.h:1160
__hostdev__ const GridBlindMetaData & blindMetaData(uint32_t n) const
Definition: NanoVDB.h:2217
static __hostdev__ uint32_t bitCount()
Return the number of bits available in this Mask.
Definition: NanoVDB.h:1044
__hostdev__ void setDev(const bool &)
Definition: NanoVDB.h:4078
uint64_t Type
Definition: NanoVDB.h:484
__hostdev__ Vec3T indexToWorldDir(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2091
__hostdev__ const void * getRoot() const
Get a const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2272
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:3251
__hostdev__ bool isActive() const
Return true if any of the voxel value are active in this leaf node.
Definition: NanoVDB.h:4578
GridBlindDataClass
Blind-data Classes that are currently supported by NanoVDB.
Definition: NanoVDB.h:416
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4271
__hostdev__ const void * treePtr() const
Definition: NanoVDB.h:1915
static __hostdev__ size_t memUsage()
Return the memory footprint in bytes of this Mask.
Definition: NanoVDB.h:1041
const typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1654
Visits all active values in a leaf node.
Definition: NanoVDB.h:4349
__hostdev__ const LeafNodeType * getFirstLeaf() const
Definition: NanoVDB.h:2444
__hostdev__ Vec3T indexToWorldDirF(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2114