OpenVDB 12.1.1
NanoVDB.h
1 // Copyright Contributors to the OpenVDB Project
2 // SPDX-License-Identifier: Apache-2.0
3 
4 /*!
5  \file nanovdb/NanoVDB.h
6 
7  \author Ken Museth
8 
9  \date January 8, 2020
10 
11  \brief Implements a light-weight self-contained VDB data-structure in a
12  single file! In other words, this is a significantly watered-down
13  version of the OpenVDB implementation, with few dependencies - so
14  a one-stop-shop for a minimalistic VDB data structure that runs on
15  most platforms!
16 
17  \note It is important to note that NanoVDB (by design) is a read-only
18  sparse GPU (and CPU) friendly data structure intended for applications
19  like rendering and collision detection. As such it obviously lacks
20  a lot of the functionality and features of OpenVDB grids. NanoVDB
21  is essentially a compact linearized (or serialized) representation of
22  an OpenVDB tree with getValue methods only. For best performance use
23  the ReadAccessor::getValue method as opposed to the Tree::getValue
24  method. Note that since a ReadAccessor caches previous access patterns
25  it is by design not thread-safe, so use one instantiation per thread
26  (it is very light-weight). Also, it is not safe to copy accessors between
27  the GPU and CPU! In fact, client code should only interface
28  with the API of the Grid class (all other nodes of the NanoVDB data
29  structure can safely be ignored by most client code)!
30 
31 
32  \warning NanoVDB grids can only be constructed via tools like createNanoGrid
33  or the GridBuilder. This explains why none of the grid nodes defined below
34  have public constructors or destructors.
35 
36  \details Please see the following paper for more details on the data structure:
37  K. Museth, “VDB: High-Resolution Sparse Volumes with Dynamic Topology”,
38  ACM Transactions on Graphics 32(3), 2013, which can be found here:
39  http://www.museth.org/Ken/Publications_files/Museth_TOG13.pdf
40 
41  NanoVDB was first published here: https://dl.acm.org/doi/fullHtml/10.1145/3450623.3464653
42 
43 
44  Overview: This file implements the following fundamental classes that, when combined,
45  form the backbone of the VDB tree data structure:
46 
47  Coord - a signed integer coordinate
48  Vec3 - a 3D vector
49  Vec4 - a 4D vector
50  BBox - a bounding box
51  Mask - a bitmask essential to the non-root tree nodes
52  Map - an affine coordinate transformation
53  Grid - contains a Tree and a map for world<->index transformations. Use
54  this class as the main API with client code!
55  Tree - contains a RootNode and getValue methods that should only be used for debugging
56  RootNode - the top-level node of the VDB data structure
57  InternalNode - the internal nodes of the VDB data structure
58  LeafNode - the lowest level tree nodes that encode voxel values and state
59  ReadAccessor - implements accelerated random access operations
60 
61  Semantics: A VDB data structure encodes values and (binary) states associated with
62  signed integer coordinates. Values encoded at the leaf node level are
63  denoted voxel values, and values associated with other tree nodes are referred
64  to as tile values, which by design cover a larger coordinate index domain.
65 
66 
67  Memory layout:
68 
69  It's important to emphasize that all the grid data (defined below) are explicitly 32 byte
70  aligned, which implies that any memory buffer that contains a NanoVDB grid must also be
71  32 byte aligned. That is, the memory address of the beginning of a buffer (see ascii diagram below)
72  must be divisible by 32, i.e. uintptr_t(&buffer)%32 == 0! If this is not the case, the C++ standard
73  says the behaviour is undefined! Normally this is not a concern on GPUs, because they use 256 byte
74  aligned allocations, but the same cannot be said about the CPU.
75 
76  GridData is always at the very beginning of the buffer immediately followed by TreeData!
77  The remaining nodes and blind-data are allowed to be scattered throughout the buffer,
78  though in practice they are arranged as:
79 
80  GridData: 672 bytes (e.g. magic, checksum, major, flags, index, count, size, name, map, world bbox, voxel size, class, type, offset, count)
81 
82  TreeData: 64 bytes (node counts and byte offsets)
83 
84  ... optional padding ...
85 
86  RootData: size depends on ValueType (index bbox, voxel count, tile count, min/max/avg/standard deviation)
87 
88  Array of: RootData::Tile
89 
90  ... optional padding ...
91 
92  Array of: Upper InternalNodes of size 32^3: bbox, two bit masks, 32768 tile values, and min/max/avg/standard deviation values
93 
94  ... optional padding ...
95 
96  Array of: Lower InternalNodes of size 16^3: bbox, two bit masks, 4096 tile values, and min/max/avg/standard deviation values
97 
98  ... optional padding ...
99 
100  Array of: LeafNodes of size 8^3: bbox, bit masks, 512 voxel values, and min/max/avg/standard deviation values
101 
102  ... optional padding ...
103 
104  Array of: GridBlindMetaData (288 bytes). The offset and count are defined in GridData::mBlindMetadataOffset and GridData::mBlindMetadataCount
105 
106  ... optional padding ...
107 
108  Array of: blind data
109 
110  Notation: "]---[" implies it has optional padding, and "][" implies zero padding
111 
112  [GridData(672B)][TreeData(64B)]---[RootData][N x Root::Tile]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
113 
114  where each labeled pointer refers to the start of the corresponding section of the buffer:
115  GridType::DataType* gridData -> [GridData], i.e. the start of the 32B aligned buffer
116  RootType::DataType* rootData -> [RootData]
117  RootType::DataType::Tile* tile -> [N x Root::Tile]
118  Node2::DataType* upperData -> [InternalData<5>]
119  Node1::DataType* lowerData -> [InternalData<4>]
120  Node0::DataType* leafData -> [LeafData<3>]
121  GridBlindMetaData* -> [BLINDMETA...]
122 
123 */
124 
125 #ifndef NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
126 #define NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
127 
128 // The following two header files are the only mandatory dependencies
129 #include <nanovdb/util/Util.h>// for __hostdev__ and lots of other utility functions
130 #include <nanovdb/math/Math.h>// for Coord, BBox, Vec3, Vec4 etc
131 
132 // Do not change this value! 32 byte alignment is fixed in NanoVDB
133 #define NANOVDB_DATA_ALIGNMENT 32
134 
135 // NANOVDB_MAGIC_NUMB previously used for both grids and files (starting with v32.6.0)
136 // NANOVDB_MAGIC_GRID currently used exclusively for grids (serialized to a single buffer)
137 // NANOVDB_MAGIC_FILE currently used exclusively for files
138 // (the leading 0x30/0x31/0x32 byte of the hex constants below is the ASCII code of the trailing '0'/'1'/'2' in "NanoVDB0"/"NanoVDB1"/"NanoVDB2")
139 #define NANOVDB_MAGIC_NUMB 0x304244566f6e614eUL // "NanoVDB0" in hex - little endian (uint64_t)
140 #define NANOVDB_MAGIC_GRID 0x314244566f6e614eUL // "NanoVDB1" in hex - little endian (uint64_t)
141 #define NANOVDB_MAGIC_FILE 0x324244566f6e614eUL // "NanoVDB2" in hex - little endian (uint64_t)
142 #define NANOVDB_MAGIC_MASK 0x00FFFFFFFFFFFFFFUL // use this mask to remove the number
143 
144 #define NANOVDB_USE_NEW_MAGIC_NUMBERS// enables use of the new magic numbers described above
145 
146 #define NANOVDB_MAJOR_VERSION_NUMBER 32 // reflects changes to the ABI and hence also the file format
147 #define NANOVDB_MINOR_VERSION_NUMBER 8 // reflects changes to the API but not ABI
148 #define NANOVDB_PATCH_VERSION_NUMBER 0 // reflects changes that do not affect the ABI or API
149 
150 #define TBB_SUPPRESS_DEPRECATED_MESSAGES 1
151 
152 // This replaces a Coord key at the root level with a single uint64_t
153 #define NANOVDB_USE_SINGLE_ROOT_KEY
154 
155 // This replaces three levels of Coord keys in the ReadAccessor with one Coord
156 //#define NANOVDB_USE_SINGLE_ACCESSOR_KEY
157 
158 // Use this to switch between std::ofstream or FILE implementations
159 //#define NANOVDB_USE_IOSTREAMS
160 
161 #define NANOVDB_FPN_BRANCHLESS
162 
163 #if !defined(NANOVDB_ALIGN)
164 #define NANOVDB_ALIGN(n) alignas(n)
165 #endif // !defined(NANOVDB_ALIGN)
166 
167 namespace nanovdb {// =================================================================
168 
169 // --------------------------> Build types <------------------------------------
170 
171 /// @brief Dummy type for a voxel whose value equals an offset into an external value array
172 class ValueIndex{};
173 
174 /// @brief Dummy type for a voxel whose value equals an offset into an external value array of active values
175 class ValueOnIndex{};
176 
177 /// @brief Like @c ValueIndex but with a mutable mask
178 class ValueIndexMask{};
179 
180 /// @brief Like @c ValueOnIndex but with a mutable mask
181 class ValueOnIndexMask{};
182 
183 /// @brief Dummy type for a voxel whose value equals its binary active state
184 class ValueMask{};
185 
186  /// @brief Dummy type for a 16 bit floating point value (placeholder for IEEE 754 Half)
187 class Half{};
188 
189  /// @brief Dummy type for a 4bit quantization of floating point values
190 class Fp4{};
191 
192  /// @brief Dummy type for an 8bit quantization of floating point values
193 class Fp8{};
194 
195  /// @brief Dummy type for a 16bit quantization of floating point values
196 class Fp16{};
197 
198 /// @brief Dummy type for a variable bit quantization of floating point values
199 class FpN{};
200 
201 /// @brief Dummy type for indexing points into voxels
202 class Point{};
203 
204 // --------------------------> GridType <------------------------------------
205 
206 /// @brief return the number of characters (including null termination) required to convert enum type to a string
207 ///
208 /// @note This curious implementation, which subtracts End from StrLen, avoids duplicate values in the enum!
209 template <class EnumT>
210 __hostdev__ inline constexpr uint32_t strlen(){return (uint32_t)EnumT::StrLen - (uint32_t)EnumT::End;}
211 
212 /// @brief List of types that are currently supported by NanoVDB
213 ///
214 /// @note To expand on this list do:
215 /// 1) Add the new type between Unknown and End in the enum below
216 /// 2) Add the new type to OpenToNanoVDB::processGrid that maps OpenVDB types to GridType
217 /// 3) Verify that the ConvertTrait in NanoToOpenVDB.h works correctly with the new type
218 /// 4) Add the new type to toGridType (defined below) that maps NanoVDB types to GridType
219 /// 5) Add the new type to toStr (defined below)
220 enum class GridType : uint32_t { Unknown = 0, // unknown value type - should rarely be used
221  Float = 1, // single precision floating point value
222  Double = 2, // double precision floating point value
223  Int16 = 3, // half precision signed integer value
224  Int32 = 4, // single precision signed integer value
225  Int64 = 5, // double precision signed integer value
226  Vec3f = 6, // single precision floating 3D vector
227  Vec3d = 7, // double precision floating 3D vector
228  Mask = 8, // no value, just the active state
229  Half = 9, // half precision floating point value (placeholder for IEEE 754 Half)
230  UInt32 = 10, // single precision unsigned integer value
231  Boolean = 11, // boolean value, encoded in bit array
232  RGBA8 = 12, // RGBA packed into 32bit word in reverse-order, i.e. R is lowest byte.
233  Fp4 = 13, // 4bit quantization of floating point value
234  Fp8 = 14, // 8bit quantization of floating point value
235  Fp16 = 15, // 16bit quantization of floating point value
236  FpN = 16, // variable bit quantization of floating point value
237  Vec4f = 17, // single precision floating 4D vector
238  Vec4d = 18, // double precision floating 4D vector
239  Index = 19, // index into an external array of active and inactive values
240  OnIndex = 20, // index into an external array of active values
241  IndexMask = 21, // like Index but with a mutable mask
242  OnIndexMask = 22, // like OnIndex but with a mutable mask
243  PointIndex = 23, // voxels encode indices to co-located points
244  Vec3u8 = 24, // 8bit quantization of floating point 3D vector (only as blind data)
245  Vec3u16 = 25, // 16bit quantization of floating point 3D vector (only as blind data)
246  UInt8 = 26, // 8 bit unsigned integer values (e.g. 0 -> 255 gray scale)
247  End = 27,// total number of types in this enum (excluding StrLen since it's not a type)
248  StrLen = End + 12};// this entry is used to determine the minimum size of c-string
249 
250 /// @brief Maps a GridType to a c-string
251 /// @param dst destination string of size 12 or larger
252 /// @param gridType GridType enum to be mapped to a string
253  /// @return Returns a c-string used to describe a GridType
254 __hostdev__ inline char* toStr(char *dst, GridType gridType)
255 {
256  switch (gridType){
257  case GridType::Unknown: return util::strcpy(dst, "?");
258  case GridType::Float: return util::strcpy(dst, "float");
259  case GridType::Double: return util::strcpy(dst, "double");
260  case GridType::Int16: return util::strcpy(dst, "int16");
261  case GridType::Int32: return util::strcpy(dst, "int32");
262  case GridType::Int64: return util::strcpy(dst, "int64");
263  case GridType::Vec3f: return util::strcpy(dst, "Vec3f");
264  case GridType::Vec3d: return util::strcpy(dst, "Vec3d");
265  case GridType::Mask: return util::strcpy(dst, "Mask");
266  case GridType::Half: return util::strcpy(dst, "Half");
267  case GridType::UInt32: return util::strcpy(dst, "uint32");
268  case GridType::Boolean: return util::strcpy(dst, "bool");
269  case GridType::RGBA8: return util::strcpy(dst, "RGBA8");
270  case GridType::Fp4: return util::strcpy(dst, "Float4");
271  case GridType::Fp8: return util::strcpy(dst, "Float8");
272  case GridType::Fp16: return util::strcpy(dst, "Float16");
273  case GridType::FpN: return util::strcpy(dst, "FloatN");
274  case GridType::Vec4f: return util::strcpy(dst, "Vec4f");
275  case GridType::Vec4d: return util::strcpy(dst, "Vec4d");
276  case GridType::Index: return util::strcpy(dst, "Index");
277  case GridType::OnIndex: return util::strcpy(dst, "OnIndex");
278  case GridType::IndexMask: return util::strcpy(dst, "IndexMask");
279  case GridType::OnIndexMask: return util::strcpy(dst, "OnIndexMask");// StrLen = 11 + 1 + End
280  case GridType::PointIndex: return util::strcpy(dst, "PointIndex");
281  case GridType::Vec3u8: return util::strcpy(dst, "Vec3u8");
282  case GridType::Vec3u16: return util::strcpy(dst, "Vec3u16");
283  case GridType::UInt8: return util::strcpy(dst, "uint8");
284  default: return util::strcpy(dst, "End");
285  }
286 }
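// Usage sketch (illustrative only; exampleGridTypeName is a hypothetical helper, not part of the API):
// strlen<GridType>() evaluates to 12, i.e. StrLen - End, which is large enough to hold the longest
// name, "OnIndexMask", including its null termination.
__hostdev__ inline char* exampleGridTypeName(GridType gridType, char* dst)
{
    NANOVDB_ASSERT(dst);// dst must point to at least strlen<GridType>() = 12 characters
    return toStr(dst, gridType);// e.g. GridType::Float -> "float", GridType::OnIndexMask -> "OnIndexMask"
}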
287 
288 // --------------------------> GridClass <------------------------------------
289 
290 /// @brief Classes (superset of OpenVDB) that are currently supported by NanoVDB
291 enum class GridClass : uint32_t { Unknown = 0,
292  LevelSet = 1, // narrow band level set, e.g. SDF
293  FogVolume = 2, // fog volume, e.g. density
294  Staggered = 3, // staggered MAC grid, e.g. velocity
295  PointIndex = 4, // point index grid
296  PointData = 5, // point data grid
297  Topology = 6, // grid with active states only (no values)
298  VoxelVolume = 7, // volume of geometric cubes, e.g. colored cubes in Minecraft
299  IndexGrid = 8, // grid whose values are offsets, e.g. into an external array
300  TensorGrid = 9, // Index grid for indexing learnable tensor features
301  End = 10,// total number of types in this enum (excluding StrLen since it's not a type)
302  StrLen = End + 7};// this entry is used to determine the minimum size of c-string
303 
304 
305  /// @brief Returns a c-string used to describe a GridClass
306 /// @param dst destination string of size 7 or larger
307 /// @param gridClass GridClass enum to be converted to a string
308 __hostdev__ inline char* toStr(char *dst, GridClass gridClass)
309 {
310  switch (gridClass){
311  case GridClass::Unknown: return util::strcpy(dst, "?");
312  case GridClass::LevelSet: return util::strcpy(dst, "SDF");
313  case GridClass::FogVolume: return util::strcpy(dst, "FOG");
314  case GridClass::Staggered: return util::strcpy(dst, "MAC");
315  case GridClass::PointIndex: return util::strcpy(dst, "PNTIDX");// StrLen = 6 + 1 + End
316  case GridClass::PointData: return util::strcpy(dst, "PNTDAT");
317  case GridClass::Topology: return util::strcpy(dst, "TOPO");
318  case GridClass::VoxelVolume: return util::strcpy(dst, "VOX");
319  case GridClass::IndexGrid: return util::strcpy(dst, "INDEX");
320  case GridClass::TensorGrid: return util::strcpy(dst, "TENSOR");
321  default: return util::strcpy(dst, "END");
322  }
323 }
324 
325 // --------------------------> GridFlags <------------------------------------
326 
327 /// @brief Grid flags which indicate what extra information is present in the grid buffer.
328 enum class GridFlags : uint32_t {
329  HasLongGridName = 1 << 0, // grid name is longer than 256 characters
330  HasBBox = 1 << 1, // nodes contain bounding-boxes of active values
331  HasMinMax = 1 << 2, // nodes contain min/max of active values
332  HasAverage = 1 << 3, // nodes contain averages of active values
333  HasStdDeviation = 1 << 4, // nodes contain standard deviations of active values
334  IsBreadthFirst = 1 << 5, // nodes are typically arranged breadth-first in memory
335  End = 1 << 6, // use End - 1 as a mask for the 6 lower bit flags
336  StrLen = End + 23,// this entry is used to determine the minimum size of c-string
337 };
338 
339  /// @brief Returns a c-string used to describe a GridFlags
340 /// @param dst destination string of size 23 or larger
341 /// @param gridFlags GridFlags enum to be converted to a string
342 __hostdev__ inline const char* toStr(char *dst, GridFlags gridFlags)
343 {
344  switch (gridFlags){
345  case GridFlags::HasLongGridName: return util::strcpy(dst, "has long grid name");
346  case GridFlags::HasBBox: return util::strcpy(dst, "has bbox");
347  case GridFlags::HasMinMax: return util::strcpy(dst, "has min/max");
348  case GridFlags::HasAverage: return util::strcpy(dst, "has average");
349  case GridFlags::HasStdDeviation: return util::strcpy(dst, "has standard deviation");// StrLen = 22 + 1 + End
350  case GridFlags::IsBreadthFirst: return util::strcpy(dst, "is breadth-first");
351  default: return util::strcpy(dst, "end");
352  }
353 }
354 
355 // --------------------------> MagicType <------------------------------------
356 
357 /// @brief Enums used to identify magic numbers recognized by NanoVDB
358 enum class MagicType : uint32_t { Unknown = 0,// first 64 bits are neither of the cases below
359  OpenVDB = 1,// first 32 bits = 0x56444220UL
360  NanoVDB = 2,// first 64 bits = NANOVDB_MAGIC_NUMB
361  NanoGrid = 3,// first 64 bits = NANOVDB_MAGIC_GRID
362  NanoFile = 4,// first 64 bits = NANOVDB_MAGIC_FILE
363  End = 5,
364  StrLen = End + 14};// this entry is used to determine the minimum size of c-string
365 
366 /// @brief maps 64 bits of magic number to enum
367 __hostdev__ inline MagicType toMagic(uint64_t magic)
368 {
369  switch (magic){
370  case NANOVDB_MAGIC_NUMB: return MagicType::NanoVDB;
371  case NANOVDB_MAGIC_GRID: return MagicType::NanoGrid;
372  case NANOVDB_MAGIC_FILE: return MagicType::NanoFile;
373  default: return (magic & ~uint32_t(0)) == 0x56444220UL ? MagicType::OpenVDB : MagicType::Unknown;
374  }
375 }
376 
377 /// @brief print 64-bit magic number to string
378 /// @param dst destination string of size 25 or larger
379 /// @param magic 64 bit magic number to be printed
380 /// @return return destination string @c dst
381 __hostdev__ inline char* toStr(char *dst, MagicType magic)
382 {
383  switch (magic){
384  case MagicType::Unknown: return util::strcpy(dst, "unknown");
385  case MagicType::NanoVDB: return util::strcpy(dst, "nanovdb");
386  case MagicType::NanoGrid: return util::strcpy(dst, "nanovdb::Grid");// StrLen = 13 + 1 + End
387  case MagicType::NanoFile: return util::strcpy(dst, "nanovdb::File");
388  case MagicType::OpenVDB: return util::strcpy(dst, "openvdb");
389  default: return util::strcpy(dst, "end");
390  }
391 }
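// Usage sketch (illustrative only; exampleClassifyBuffer is a hypothetical helper): the first 8 bytes
// of a NanoVDB buffer hold the magic number of GridData, so toMagic can classify an unknown buffer.
__hostdev__ inline MagicType exampleClassifyBuffer(const void* buffer)
{
    NANOVDB_ASSERT(buffer);
    const uint64_t magic = *static_cast<const uint64_t*>(buffer);// read the leading 64 bit magic number
    return toMagic(magic);// MagicType::NanoVDB (legacy), NanoGrid, NanoFile, OpenVDB or Unknown
}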
392 
393 // --------------------------> PointType enums <------------------------------------
394 
395 // Define the type used when the points are encoded as blind data in the output grid
396 enum class PointType : uint32_t { Disable = 0,// no point information e.g. when BuildT != Point
397  PointID = 1,// linear index of type uint32_t to points
398  World64 = 2,// Vec3d in world space
399  World32 = 3,// Vec3f in world space
400  Grid64 = 4,// Vec3d in grid space
401  Grid32 = 5,// Vec3f in grid space
402  Voxel32 = 6,// Vec3f in voxel space
403  Voxel16 = 7,// Vec3u16 in voxel space
404  Voxel8 = 8,// Vec3u8 in voxel space
405  Default = 9,// output matches input, i.e. Vec3d or Vec3f in world space
406  End =10 };
407 
408 // --------------------------> GridBlindData enums <------------------------------------
409 
410 /// @brief Blind-data Classes that are currently supported by NanoVDB
411 enum class GridBlindDataClass : uint32_t { Unknown = 0,
412  IndexArray = 1,
413  AttributeArray = 2,
414  GridName = 3,
415  ChannelArray = 4,
416  End = 5 };
417 
418 /// @brief Blind-data Semantics that are currently understood by NanoVDB
419 enum class GridBlindDataSemantic : uint32_t { Unknown = 0,
420  PointPosition = 1, // 3D coordinates in an unknown space
421  PointColor = 2,
422  PointNormal = 3,
423  PointRadius = 4,
424  PointVelocity = 5,
425  PointId = 6,
426  WorldCoords = 7, // 3D coordinates in world space, e.g. (0.056, 0.8, 1.8)
427  GridCoords = 8, // 3D coordinates in grid space, e.g. (1.2, 4.0, 5.7), aka index-space
428  VoxelCoords = 9, // 3D coordinates in voxel space, e.g. (0.2, 0.0, 0.7)
429  End = 10 };
430 
431 // --------------------------> BuildTraits <------------------------------------
432 
433 /// @brief Define static boolean tests for template build types
434 template<typename T>
435 struct BuildTraits
436 {
437  // check if T is an index type
438  static constexpr bool is_index = util::is_same<T, ValueIndex, ValueIndexMask, ValueOnIndex, ValueOnIndexMask>::value;
439  static constexpr bool is_onindex = util::is_same<T, ValueOnIndex, ValueOnIndexMask>::value;
440  static constexpr bool is_offindex = util::is_same<T, ValueIndex, ValueIndexMask>::value;
441  static constexpr bool is_indexmask = util::is_same<T, ValueIndexMask, ValueOnIndexMask>::value;
442  // check if T is a compressed float type with fixed bit precision
443  static constexpr bool is_FpX = util::is_same<T, Fp4, Fp8, Fp16>::value;
444  // check if T is a compressed float type with fixed or variable bit precision
445  static constexpr bool is_Fp = util::is_same<T, Fp4, Fp8, Fp16, FpN>::value;
446  // check if T is a POD float type, i.e float or double
447  static constexpr bool is_float = util::is_floating_point<T>::value;
448  // check if T is a template specialization of LeafData<T>, i.e. has T mValues[512]
449  static constexpr bool is_special = is_index || is_Fp || util::is_same<T, Point, bool, ValueMask>::value;
450 }; // BuildTraits
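// Illustrative compile-time checks (not part of the original header) of how BuildTraits classifies
// the build types defined above.
static_assert(BuildTraits<ValueOnIndex>::is_index && BuildTraits<ValueOnIndex>::is_onindex, "ValueOnIndex indexes active values");
static_assert(BuildTraits<Fp8>::is_FpX && BuildTraits<Fp8>::is_Fp, "Fp8 is a fixed bit precision compressed float");
static_assert(BuildTraits<float>::is_float && !BuildTraits<float>::is_special, "float leaf nodes store plain T mValues[512]");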
451 
452 // --------------------------> BuildToValueMap <------------------------------------
453 
454 /// @brief Maps one type (e.g. the build types above) to other (actual) types
455 template<typename T>
456 struct BuildToValueMap
457 {
458  using Type = T;
459  using type = T;
460 };
461 
462 template<>
463 struct BuildToValueMap<ValueIndex>
464 {
465  using Type = uint64_t;
466  using type = uint64_t;
467 };
468 
469 template<>
470 struct BuildToValueMap<ValueOnIndex>
471 {
472  using Type = uint64_t;
473  using type = uint64_t;
474 };
475 
476 template<>
477 struct BuildToValueMap<ValueIndexMask>
478 {
479  using Type = uint64_t;
480  using type = uint64_t;
481 };
482 
483 template<>
484 struct BuildToValueMap<ValueOnIndexMask>
485 {
486  using Type = uint64_t;
487  using type = uint64_t;
488 };
489 
490 template<>
491 struct BuildToValueMap<ValueMask>
492 {
493  using Type = bool;
494  using type = bool;
495 };
496 
497 template<>
498 struct BuildToValueMap<Half>
499 {
500  using Type = float;
501  using type = float;
502 };
503 
504 template<>
505 struct BuildToValueMap<Fp4>
506 {
507  using Type = float;
508  using type = float;
509 };
510 
511 template<>
512 struct BuildToValueMap<Fp8>
513 {
514  using Type = float;
515  using type = float;
516 };
517 
518 template<>
519 struct BuildToValueMap<Fp16>
520 {
521  using Type = float;
522  using type = float;
523 };
524 
525 template<>
526 struct BuildToValueMap<FpN>
527 {
528  using Type = float;
529  using type = float;
530 };
531 
532 template<>
533 struct BuildToValueMap<Point>
534 {
535  using Type = uint64_t;
536  using type = uint64_t;
537 };
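// Illustrative compile-time checks (not part of the original header) of how BuildToValueMap resolves
// the value type that is actually stored for a given build type.
static_assert(util::is_same<BuildToValueMap<Fp4>::Type, float>::value, "compressed floats decode to float");
static_assert(util::is_same<BuildToValueMap<ValueOnIndex>::Type, uint64_t>::value, "index build types map to 64 bit offsets");
static_assert(util::is_same<BuildToValueMap<ValueMask>::Type, bool>::value, "value masks map to booleans");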
538 
539 // --------------------------> utility functions related to alignment <------------------------------------
540 
541 /// @brief return true if the specified pointer is 32 byte aligned
542 __hostdev__ inline static bool isAligned(const void* p){return uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;}
543 
544 /// @brief return the smallest number of bytes that when added to the specified pointer results in a 32 byte aligned pointer.
545 __hostdev__ inline static uint64_t alignmentPadding(const void* p)
546 {
547  NANOVDB_ASSERT(p);
548  return (NANOVDB_DATA_ALIGNMENT - (uint64_t(p) % NANOVDB_DATA_ALIGNMENT)) % NANOVDB_DATA_ALIGNMENT;
549 }
550 
551 /// @brief offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers.
552 template <typename T>
553 __hostdev__ inline static T* alignPtr(T* p){return util::PtrAdd<T>(p, alignmentPadding(p));}
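// Usage sketch (illustrative only; exampleAlign is a hypothetical helper): given a possibly unaligned
// address, compute the padding to the next 32 byte boundary and the corresponding aligned pointer.
__hostdev__ inline uint8_t* exampleAlign(uint8_t* raw)
{
    NANOVDB_ASSERT(raw);
    const uint64_t padding = alignmentPadding(raw);// number of bytes in [0, 31]
    uint8_t* aligned = alignPtr(raw);// same as raw + padding
    NANOVDB_ASSERT(isAligned(aligned) && padding < NANOVDB_DATA_ALIGNMENT);
    return aligned;
}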
554 
555 // --------------------------> isFloatingPoint(GridType) <------------------------------------
556 
557 /// @brief return true if the GridType maps to a floating point type
558 __hostdev__ inline bool isFloatingPoint(GridType gridType)
559 {
560  return gridType == GridType::Float ||
561  gridType == GridType::Double ||
562  gridType == GridType::Half ||
563  gridType == GridType::Fp4 ||
564  gridType == GridType::Fp8 ||
565  gridType == GridType::Fp16 ||
566  gridType == GridType::FpN;
567 }
568 
569 // --------------------------> isFloatingPointVector(GridType) <------------------------------------
570 
571 /// @brief return true if the GridType maps to a floating point 3D or 4D vector.
572 __hostdev__ inline bool isFloatingPointVector(GridType gridType)
573 {
574  return gridType == GridType::Vec3f ||
575  gridType == GridType::Vec3d ||
576  gridType == GridType::Vec4f ||
577  gridType == GridType::Vec4d;
578 }
579 
580 // --------------------------> isInteger(GridType) <------------------------------------
581 
582 /// @brief Return true if the GridType maps to a POD integer type.
583 /// @details These types are used to associate a voxel with a POD integer type
584 __hostdev__ inline bool isInteger(GridType gridType)
585 {
586  return gridType == GridType::Int16 ||
587  gridType == GridType::Int32 ||
588  gridType == GridType::Int64 ||
589  gridType == GridType::UInt32||
590  gridType == GridType::UInt8;
591 }
592 
593 // --------------------------> isIndex(GridType) <------------------------------------
594 
595 /// @brief Return true if the GridType maps to a special index type (not a POD integer type).
596 /// @details These types are used to index from a voxel into an external array of values, e.g. sidecar or blind data.
597 __hostdev__ inline bool isIndex(GridType gridType)
598 {
599  return gridType == GridType::Index ||// index both active and inactive values
600  gridType == GridType::OnIndex ||// index active values only
601  gridType == GridType::IndexMask ||// as Index, but with an additional mask
602  gridType == GridType::OnIndexMask;// as OnIndex, but with an additional mask
603 }
604 
605 // --------------------------> isValid(GridType, GridClass) <------------------------------------
606 
607 /// @brief return true if the combination of GridType and GridClass is valid.
608 __hostdev__ inline bool isValid(GridType gridType, GridClass gridClass)
609 {
610  if (gridClass == GridClass::LevelSet || gridClass == GridClass::FogVolume) {
611  return isFloatingPoint(gridType);
612  } else if (gridClass == GridClass::Staggered) {
613  return isFloatingPointVector(gridType);
614  } else if (gridClass == GridClass::PointIndex || gridClass == GridClass::PointData) {
615  return gridType == GridType::PointIndex || gridType == GridType::UInt32;
616  } else if (gridClass == GridClass::Topology) {
617  return gridType == GridType::Mask;
618  } else if (gridClass == GridClass::IndexGrid) {
619  return isIndex(gridType);
620  } else if (gridClass == GridClass::VoxelVolume) {
621  return gridType == GridType::RGBA8 || gridType == GridType::Float ||
622  gridType == GridType::Double || gridType == GridType::Vec3f ||
623  gridType == GridType::Vec3d || gridType == GridType::UInt32 ||
624  gridType == GridType::UInt8;
625  }
626  return gridClass < GridClass::End && gridType < GridType::End; // any valid combination
627 }
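// Usage sketch (illustrative only; exampleIsValid is a hypothetical helper): a level set requires a
// floating point value type, a staggered (MAC) grid requires a floating point vector type, etc.
__hostdev__ inline bool exampleIsValid()
{
    const bool a = isValid(GridType::Float, GridClass::LevelSet); // true
    const bool b = isValid(GridType::Vec3f, GridClass::Staggered);// true
    const bool c = isValid(GridType::Int32, GridClass::FogVolume);// false - fog volumes must be floating point
    return a && b && !c;
}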
628 
629 // --------------------------> validation of blind data meta data <------------------------------------
630 
631 /// @brief return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid.
632 __hostdev__ inline bool isValid(const GridBlindDataClass& blindClass,
633  const GridBlindDataSemantic& blindSemantics,
634  const GridType& blindType)
635 {
636  bool test = false;
637  switch (blindClass) {
638  case GridBlindDataClass::IndexArray:
639  test = (blindSemantics == GridBlindDataSemantic::Unknown ||
640  blindSemantics == GridBlindDataSemantic::PointId) &&
641  isInteger(blindType);
642  break;
643  case GridBlindDataClass::AttributeArray:
644  if (blindSemantics == GridBlindDataSemantic::PointPosition ||
645  blindSemantics == GridBlindDataSemantic::WorldCoords) {
646  test = blindType == GridType::Vec3f || blindType == GridType::Vec3d;
647  } else if (blindSemantics == GridBlindDataSemantic::GridCoords) {
648  test = blindType == GridType::Vec3f;
649  } else if (blindSemantics == GridBlindDataSemantic::VoxelCoords) {
650  test = blindType == GridType::Vec3f || blindType == GridType::Vec3u8 || blindType == GridType::Vec3u16;
651  } else {
652  test = blindSemantics != GridBlindDataSemantic::PointId;
653  }
654  break;
655  case GridBlindDataClass::GridName:
656  test = blindSemantics == GridBlindDataSemantic::Unknown && blindType == GridType::Unknown;
657  break;
658  default: // captures blindClass == Unknown and ChannelArray
659  test = blindClass < GridBlindDataClass::End &&
660  blindSemantics < GridBlindDataSemantic::End &&
661  blindType < GridType::End; // any valid combination
662  break;
663  }
664  //if (!test) printf("Invalid combination: GridBlindDataClass=%u, GridBlindDataSemantic=%u, GridType=%u\n",(uint32_t)blindClass, (uint32_t)blindSemantics, (uint32_t)blindType);
665  return test;
666 }
667 
668 // ----------------------------> Version class <-------------------------------------
669 
670 /// @brief Bit-compacted representation of all three version numbers
671 ///
672 /// @details major is the top 11 bits, minor is the 11 middle bits and patch is the lower 10 bits
673 class Version
674 {
675  uint32_t mData; // 11 + 11 + 10 bit packing of major + minor + patch
676 public:
677  static constexpr uint32_t End = 0, StrLen = 8;// for strlen<Version>()
678  /// @brief Default constructor
679  __hostdev__ Version()
680  : mData(uint32_t(NANOVDB_MAJOR_VERSION_NUMBER) << 21 |
681  uint32_t(NANOVDB_MINOR_VERSION_NUMBER) << 10 |
682  uint32_t(NANOVDB_PATCH_VERSION_NUMBER))
683  {
684  }
685  /// @brief Constructor from a raw uint32_t data representation
686  __hostdev__ Version(uint32_t data) : mData(data) {}
687  /// @brief Constructor from major.minor.patch version numbers
688  __hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
689  : mData(major << 21 | minor << 10 | patch)
690  {
691  NANOVDB_ASSERT(major < (1u << 11)); // max value of major is 2047
692  NANOVDB_ASSERT(minor < (1u << 11)); // max value of minor is 2047
693  NANOVDB_ASSERT(patch < (1u << 10)); // max value of patch is 1023
694  }
695  __hostdev__ bool operator==(const Version& rhs) const { return mData == rhs.mData; }
696  __hostdev__ bool operator<( const Version& rhs) const { return mData < rhs.mData; }
697  __hostdev__ bool operator<=(const Version& rhs) const { return mData <= rhs.mData; }
698  __hostdev__ bool operator>( const Version& rhs) const { return mData > rhs.mData; }
699  __hostdev__ bool operator>=(const Version& rhs) const { return mData >= rhs.mData; }
700  __hostdev__ uint32_t id() const { return mData; }
701  __hostdev__ uint32_t getMajor() const { return (mData >> 21) & ((1u << 11) - 1); }
702  __hostdev__ uint32_t getMinor() const { return (mData >> 10) & ((1u << 11) - 1); }
703  __hostdev__ uint32_t getPatch() const { return mData & ((1u << 10) - 1); }
704  __hostdev__ bool isCompatible() const { return this->getMajor() == uint32_t(NANOVDB_MAJOR_VERSION_NUMBER); }
705  /// @brief Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER
706  /// @return return 0 if the major version equals NANOVDB_MAJOR_VERSION_NUMBER, else a negative age if this
707  /// instance has a smaller major version (is older), and a positive age if it is newer, i.e. larger.
708  __hostdev__ int age() const {return int(this->getMajor()) - int(NANOVDB_MAJOR_VERSION_NUMBER);}
709 }; // Version
710 
711 /// @brief print the version number to a c-string
712 /// @param dst destination string of size 8 or more
713 /// @param v version to be printed
714 /// @return returns destination string @c dst
715 __hostdev__ inline char* toStr(char *dst, const Version &v)
716 {
717  return util::sprint(dst, v.getMajor(), ".",v.getMinor(), ".",v.getPatch());
718 }
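// Usage sketch (illustrative only; exampleVersion is a hypothetical helper): Version packs
// major.minor.patch into a single uint32_t using 11 + 11 + 10 bits.
__hostdev__ inline bool exampleVersion()
{
    const Version v(32, 8, 0);// equivalent to v.id() == (32u << 21 | 8u << 10 | 0u)
    return v.getMajor() == 32 && v.getMinor() == 8 && v.getPatch() == 0 && v.isCompatible();
}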
719 
720 // ----------------------------> TensorTraits <--------------------------------------
721 
722 template<typename T, int Rank = (util::is_specialization<T, math::Vec3>::value || util::is_specialization<T, math::Vec4>::value || util::is_same<T, math::Rgba8>::value) ? 1 : 0>
723 struct TensorTraits;
724 
725 template<typename T>
726 struct TensorTraits<T, 0>
727 {
728  static const int Rank = 0; // i.e. scalar
729  static const bool IsScalar = true;
730  static const bool IsVector = false;
731  static const int Size = 1;
732  using ElementType = T;
733  static T scalar(const T& s) { return s; }
734 };
735 
736 template<typename T>
737 struct TensorTraits<T, 1>
738 {
739  static const int Rank = 1; // i.e. vector
740  static const bool IsScalar = false;
741  static const bool IsVector = true;
742  static const int Size = T::SIZE;
743  using ElementType = typename T::ValueType;
744  static ElementType scalar(const T& v) { return v.length(); }
745 };
746 
747 // ----------------------------> FloatTraits <--------------------------------------
748 
749 template<typename T, int = sizeof(typename TensorTraits<T>::ElementType)>
750 struct FloatTraits
751 {
752  using FloatType = float;
753 };
754 
755 template<typename T>
756 struct FloatTraits<T, 8>
757 {
758  using FloatType = double;
759 };
760 
761 template<>
762 struct FloatTraits<bool, 1>
763 {
764  using FloatType = bool;
765 };
766 
767 template<>
768 struct FloatTraits<ValueIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
769 {
770  using FloatType = uint64_t;
771 };
772 
773 template<>
774 struct FloatTraits<ValueIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
775 {
776  using FloatType = uint64_t;
777 };
778 
779 template<>
780 struct FloatTraits<ValueOnIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
781 {
782  using FloatType = uint64_t;
783 };
784 
785 template<>
786 struct FloatTraits<ValueOnIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
787 {
788  using FloatType = uint64_t;
789 };
790 
791 template<>
792 struct FloatTraits<ValueMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
793 {
794  using FloatType = bool;
795 };
796 
797 template<>
798 struct FloatTraits<Point, 1> // size of empty class in C++ is 1 byte and not 0 byte
799 {
800  using FloatType = double;
801 };
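// Illustrative compile-time checks (not part of the original header): FloatTraits selects the floating
// point (or index) type associated with a value type, based on the size of its scalar element.
static_assert(util::is_same<FloatTraits<float>::FloatType, float>::value, "4 byte elements map to float");
static_assert(util::is_same<FloatTraits<Vec3d>::FloatType, double>::value, "8 byte elements map to double");
static_assert(util::is_same<FloatTraits<ValueOnIndex>::FloatType, uint64_t>::value, "index types map to 64 bit offsets");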
802 
803 // ----------------------------> mapping BuildType -> GridType <--------------------------------------
804 
805 /// @brief Maps from a templated build type to a GridType enum
806 template<typename BuildT>
807 __hostdev__ inline GridType toGridType()
808 {
809  if constexpr(util::is_same<BuildT, float>::value) { // resolved at compile-time
810  return GridType::Float;
811  } else if constexpr(util::is_same<BuildT, double>::value) {
812  return GridType::Double;
813  } else if constexpr(util::is_same<BuildT, int16_t>::value) {
814  return GridType::Int16;
815  } else if constexpr(util::is_same<BuildT, int32_t>::value) {
816  return GridType::Int32;
817  } else if constexpr(util::is_same<BuildT, int64_t>::value) {
818  return GridType::Int64;
819  } else if constexpr(util::is_same<BuildT, Vec3f>::value) {
820  return GridType::Vec3f;
821  } else if constexpr(util::is_same<BuildT, Vec3d>::value) {
822  return GridType::Vec3d;
823  } else if constexpr(util::is_same<BuildT, uint32_t>::value) {
824  return GridType::UInt32;
825  } else if constexpr(util::is_same<BuildT, ValueMask>::value) {
826  return GridType::Mask;
827  } else if constexpr(util::is_same<BuildT, Half>::value) {
828  return GridType::Half;
829  } else if constexpr(util::is_same<BuildT, ValueIndex>::value) {
830  return GridType::Index;
831  } else if constexpr(util::is_same<BuildT, ValueOnIndex>::value) {
832  return GridType::OnIndex;
833  } else if constexpr(util::is_same<BuildT, ValueIndexMask>::value) {
834  return GridType::IndexMask;
835  } else if constexpr(util::is_same<BuildT, ValueOnIndexMask>::value) {
836  return GridType::OnIndexMask;
837  } else if constexpr(util::is_same<BuildT, bool>::value) {
838  return GridType::Boolean;
839  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
840  return GridType::RGBA8;
841  } else if constexpr(util::is_same<BuildT, Fp4>::value) {
842  return GridType::Fp4;
843  } else if constexpr(util::is_same<BuildT, Fp8>::value) {
844  return GridType::Fp8;
845  } else if constexpr(util::is_same<BuildT, Fp16>::value) {
846  return GridType::Fp16;
847  } else if constexpr(util::is_same<BuildT, FpN>::value) {
848  return GridType::FpN;
849  } else if constexpr(util::is_same<BuildT, Vec4f>::value) {
850  return GridType::Vec4f;
851  } else if constexpr(util::is_same<BuildT, Vec4d>::value) {
852  return GridType::Vec4d;
853  } else if constexpr(util::is_same<BuildT, Point>::value) {
854  return GridType::PointIndex;
855  } else if constexpr(util::is_same<BuildT, Vec3u8>::value) {
856  return GridType::Vec3u8;
857  } else if constexpr(util::is_same<BuildT, Vec3u16>::value) {
858  return GridType::Vec3u16;
859  } else if constexpr(util::is_same<BuildT, uint8_t>::value) {
860  return GridType::UInt8;
861  }
862  return GridType::Unknown;
863 }// toGridType
864 
865 template<typename BuildT>
866 [[deprecated("Use toGridType<T>() instead.")]]
867 __hostdev__ inline GridType mapToGridType(){return toGridType<BuildT>();}
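// Usage sketch (illustrative only; exampleToGridType is a hypothetical helper): toGridType resolves,
// via if constexpr, the GridType enum that corresponds to a given build type.
__hostdev__ inline bool exampleToGridType()
{
    return toGridType<float>() == GridType::Float &&
           toGridType<ValueOnIndex>() == GridType::OnIndex &&
           toGridType<Fp4>() == GridType::Fp4;
}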
868 
869 // ----------------------------> mapping BuildType -> GridClass <--------------------------------------
870 
871 /// @brief Maps from a templated build type to a GridClass enum
872 template<typename BuildT>
873 __hostdev__ inline GridClass toGridClass(GridClass defaultClass = GridClass::Unknown)
874 {
875  if constexpr(util::is_same<BuildT, ValueMask>::value) {
876  return GridClass::Topology;
877  } else if constexpr(BuildTraits<BuildT>::is_index) {
878  return GridClass::IndexGrid;
879  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
880  return GridClass::VoxelVolume;
881  } else if constexpr(util::is_same<BuildT, Point>::value) {
882  return GridClass::PointIndex;
883  }
884  return defaultClass;
885 }
886 
887 template<typename BuildT>
888 [[deprecated("Use toGridClass<T>() instead.")]]
889 __hostdev__ inline GridClass mapToGridClass(GridClass defaultClass = GridClass::Unknown)
890 {
891  return toGridClass<BuildT>();
892 }
893 
894 // ----------------------------> BitFlags <--------------------------------------
895 
896 template<int N>
897 struct BitArray;
898 template<>
899 struct BitArray<8>
900 {
901  uint8_t mFlags{0};
902 };
903 template<>
904 struct BitArray<16>
905 {
906  uint16_t mFlags{0};
907 };
908 template<>
909 struct BitArray<32>
910 {
911  uint32_t mFlags{0};
912 };
913 template<>
914 struct BitArray<64>
915 {
916  uint64_t mFlags{0};
917 };
918 
919 template<int N>
920 class BitFlags : public BitArray<N>
921 {
922 protected:
923  using BitArray<N>::mFlags;
924 
925 public:
926  using Type = decltype(mFlags);
927  BitFlags() {}
928  BitFlags(Type mask) : BitArray<N>{mask} {}
929  BitFlags(std::initializer_list<uint8_t> list)
930  {
931  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
932  }
933  template<typename MaskT>
934  BitFlags(std::initializer_list<MaskT> list)
935  {
936  for (auto mask : list) mFlags |= static_cast<Type>(mask);
937  }
938  __hostdev__ Type data() const { return mFlags; }
939  __hostdev__ Type& data() { return mFlags; }
940  __hostdev__ void initBit(std::initializer_list<uint8_t> list)
941  {
942  mFlags = 0u;
943  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
944  }
945  template<typename MaskT>
946  __hostdev__ void initMask(std::initializer_list<MaskT> list)
947  {
948  mFlags = 0u;
949  for (auto mask : list) mFlags |= static_cast<Type>(mask);
950  }
951  __hostdev__ Type getFlags() const { return mFlags & (static_cast<Type>(GridFlags::End) - 1u); } // mask out everything except relevant bits
952 
953  __hostdev__ void setOn() { mFlags = ~Type(0u); }
954  __hostdev__ void setOff() { mFlags = Type(0u); }
955 
956  __hostdev__ void setBitOn(uint8_t bit) { mFlags |= static_cast<Type>(1 << bit); }
957  __hostdev__ void setBitOff(uint8_t bit) { mFlags &= ~static_cast<Type>(1 << bit); }
958 
959  __hostdev__ void setBitOn(std::initializer_list<uint8_t> list)
960  {
961  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
962  }
963  __hostdev__ void setBitOff(std::initializer_list<uint8_t> list)
964  {
965  for (auto bit : list) mFlags &= ~static_cast<Type>(1 << bit);
966  }
967 
968  template<typename MaskT>
969  __hostdev__ void setMaskOn(MaskT mask) { mFlags |= static_cast<Type>(mask); }
970  template<typename MaskT>
971  __hostdev__ void setMaskOff(MaskT mask) { mFlags &= ~static_cast<Type>(mask); }
972 
973  template<typename MaskT>
974  __hostdev__ void setMaskOn(std::initializer_list<MaskT> list)
975  {
976  for (auto mask : list) mFlags |= static_cast<Type>(mask);
977  }
978  template<typename MaskT>
979  __hostdev__ void setMaskOff(std::initializer_list<MaskT> list)
980  {
981  for (auto mask : list) mFlags &= ~static_cast<Type>(mask);
982  }
983 
984  __hostdev__ void setBit(uint8_t bit, bool on) { on ? this->setBitOn(bit) : this->setBitOff(bit); }
985  template<typename MaskT>
986  __hostdev__ void setMask(MaskT mask, bool on) { on ? this->setMaskOn(mask) : this->setMaskOff(mask); }
987 
988  __hostdev__ bool isOn() const { return mFlags == ~Type(0u); }
989  __hostdev__ bool isOff() const { return mFlags == Type(0u); }
990  __hostdev__ bool isBitOn(uint8_t bit) const { return 0 != (mFlags & static_cast<Type>(1 << bit)); }
991  __hostdev__ bool isBitOff(uint8_t bit) const { return 0 == (mFlags & static_cast<Type>(1 << bit)); }
992  template<typename MaskT>
993  __hostdev__ bool isMaskOn(MaskT mask) const { return 0 != (mFlags & static_cast<Type>(mask)); }
994  template<typename MaskT>
995  __hostdev__ bool isMaskOff(MaskT mask) const { return 0 == (mFlags & static_cast<Type>(mask)); }
996  /// @brief return true if any of the masks in the list are on
997  template<typename MaskT>
998  __hostdev__ bool isMaskOn(std::initializer_list<MaskT> list) const
999  {
1000  for (auto mask : list) {
1001  if (0 != (mFlags & static_cast<Type>(mask))) return true;
1002  }
1003  return false;
1004  }
1005  /// @brief return true if any of the masks in the list are off
1006  template<typename MaskT>
1007  __hostdev__ bool isMaskOff(std::initializer_list<MaskT> list) const
1008  {
1009  for (auto mask : list) {
1010  if (0 == (mFlags & static_cast<Type>(mask))) return true;
1011  }
1012  return false;
1013  }
1014  /// @brief required for backwards compatibility
1015  __hostdev__ BitFlags& operator=(Type n)
1016  {
1017  mFlags = n;
1018  return *this;
1019  }
1020 }; // BitFlags<N>
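// Usage sketch (illustrative only; exampleBitFlags is a hypothetical helper): BitFlags is used for
// bit fields such as the grid flags stored in GridData, where GridFlags values act as masks.
inline bool exampleBitFlags()
{
    BitFlags<32> flags;// all bits are initially off
    flags.setMaskOn(GridFlags::HasBBox);
    flags.setMask(GridFlags::HasMinMax, true);
    flags.setMaskOff(GridFlags::HasBBox);
    return flags.isMaskOn(GridFlags::HasMinMax) && flags.isMaskOff(GridFlags::HasBBox);
}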
1021 
1022 // ----------------------------> Mask <--------------------------------------
1023 
1024 /// @brief Bit-mask to encode active states and facilitate sequential iterators
1025 /// and a fast codec for I/O compression.
1026 template<uint32_t LOG2DIM>
1027 class Mask
1028 {
1029 public:
1030  static constexpr uint32_t SIZE = 1U << (3 * LOG2DIM); // Number of bits in mask
1031  static constexpr uint32_t WORD_COUNT = SIZE >> 6; // Number of 64 bit words
1032 
1033  /// @brief Return the memory footprint in bytes of this Mask
1034  __hostdev__ static size_t memUsage() { return sizeof(Mask); }
1035 
1036  /// @brief Return the number of bits available in this Mask
1037  __hostdev__ static uint32_t bitCount() { return SIZE; }
1038 
1039  /// @brief Return the number of machine words used by this Mask
1040  __hostdev__ static uint32_t wordCount() { return WORD_COUNT; }
1041 
1042  /// @brief Return the total number of set bits in this Mask
1043  __hostdev__ uint32_t countOn() const
1044  {
1045  uint32_t sum = 0;
1046  for (const uint64_t *w = mWords, *q = w + WORD_COUNT; w != q; ++w)
1047  sum += util::countOn(*w);
1048  return sum;
1049  }
1050 
1051  /// @brief Return the number of lower set bits in mask up to but excluding the i'th bit
1052  inline __hostdev__ uint32_t countOn(uint32_t i) const
1053  {
1054  uint32_t n = i >> 6, sum = util::countOn(mWords[n] & ((uint64_t(1) << (i & 63u)) - 1u));
1055  for (const uint64_t* w = mWords; n--; ++w)
1056  sum += util::countOn(*w);
1057  return sum;
1058  }
1059 
1060  template<bool On>
1061  class Iterator
1062  {
1063  public:
1064  __hostdev__ Iterator()
1065  : mPos(Mask::SIZE)
1066  , mParent(nullptr)
1067  {
1068  }
1069  __hostdev__ Iterator(uint32_t pos, const Mask* parent)
1070  : mPos(pos)
1071  , mParent(parent)
1072  {
1073  }
1074  Iterator& operator=(const Iterator&) = default;
1075  __hostdev__ uint32_t operator*() const { return mPos; }
1076  __hostdev__ uint32_t pos() const { return mPos; }
1077  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1078  __hostdev__ Iterator& operator++()
1079  {
1080  mPos = mParent->findNext<On>(mPos + 1);
1081  return *this;
1082  }
1083  __hostdev__ Iterator operator++(int)
1084  {
1085  auto tmp = *this;
1086  ++(*this);
1087  return tmp;
1088  }
1089 
1090  private:
1091  uint32_t mPos;
1092  const Mask* mParent;
1093  }; // Member class Iterator
1094 
1095  class DenseIterator
1096  {
1097  public:
1098  __hostdev__ DenseIterator(uint32_t pos = Mask::SIZE)
1099  : mPos(pos)
1100  {
1101  }
1102  DenseIterator& operator=(const DenseIterator&) = default;
1103  __hostdev__ uint32_t operator*() const { return mPos; }
1104  __hostdev__ uint32_t pos() const { return mPos; }
1105  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1106  __hostdev__ DenseIterator& operator++()
1107  {
1108  ++mPos;
1109  return *this;
1110  }
1111  __hostdev__ DenseIterator operator++(int)
1112  {
1113  auto tmp = *this;
1114  ++mPos;
1115  return tmp;
1116  }
1117 
1118  private:
1119  uint32_t mPos;
1120  }; // Member class DenseIterator
1121 
1122  using OnIterator = Iterator<true>;
1123  using OffIterator = Iterator<false>;
1124 
1125  __hostdev__ OnIterator beginOn() const { return OnIterator(this->findFirst<true>(), this); }
1126 
1127  __hostdev__ OffIterator beginOff() const { return OffIterator(this->findFirst<false>(), this); }
1128 
1129  __hostdev__ DenseIterator beginAll() const { return DenseIterator(0); }
1130 
1131  /// @brief Initialize all bits to zero.
1132  __hostdev__ Mask()
1133  {
1134  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1135  mWords[i] = 0;
1136  }
1137  __hostdev__ Mask(bool on)
1138  {
1139  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1140  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1141  mWords[i] = v;
1142  }
1143 
1144  /// @brief Copy constructor
1145  __hostdev__ Mask(const Mask& other)
1146  {
1147  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1148  mWords[i] = other.mWords[i];
1149  }
1150 
1151  /// @brief Return a pointer to the list of words of the bit mask
1152  __hostdev__ uint64_t* words() { return mWords; }
1153  __hostdev__ const uint64_t* words() const { return mWords; }
1154 
1155  template<typename WordT>
1156  __hostdev__ WordT getWord(uint32_t n) const
1157  {
1159  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1160  return reinterpret_cast<WordT*>(mWords)[n];
1161  }
1162  template<typename WordT>
1163  __hostdev__ void setWord(WordT w, uint32_t n)
1164  {
1166  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1167  reinterpret_cast<WordT*>(mWords)[n] = w;
1168  }
1169 
1170  /// @brief Assignment operator that works with openvdb::util::NodeMask
1171  template<typename MaskT = Mask>
1172  __hostdev__ Mask& operator=(const MaskT& other)
1173  {
1174  static_assert(sizeof(Mask) == sizeof(MaskT), "Mismatching sizeof");
1175  static_assert(WORD_COUNT == MaskT::WORD_COUNT, "Mismatching word count");
1176  static_assert(LOG2DIM == MaskT::LOG2DIM, "Mismatching LOG2DIM");
1177  auto* src = reinterpret_cast<const uint64_t*>(&other);
1178  for (uint64_t *dst = mWords, *end = dst + WORD_COUNT; dst != end; ++dst)
1179  *dst = *src++;
1180  return *this;
1181  }
1182 
1183  //__hostdev__ Mask& operator=(const Mask& other){return *util::memcpy(this, &other);}
1184  Mask& operator=(const Mask&) = default;
1185 
1186  __hostdev__ bool operator==(const Mask& other) const
1187  {
1188  for (uint32_t i = 0; i < WORD_COUNT; ++i) {
1189  if (mWords[i] != other.mWords[i])
1190  return false;
1191  }
1192  return true;
1193  }
1194 
1195  __hostdev__ bool operator!=(const Mask& other) const { return !((*this) == other); }
1196 
1197  /// @brief Return true if the given bit is set.
1198  __hostdev__ bool isOn(uint32_t n) const { return 0 != (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1199 
1200  /// @brief Return true if the given bit is NOT set.
1201  __hostdev__ bool isOff(uint32_t n) const { return 0 == (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1202 
1203  /// @brief Return true if all the bits are set in this Mask.
1204  __hostdev__ bool isOn() const
1205  {
1206  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1207  if (mWords[i] != ~uint64_t(0))
1208  return false;
1209  return true;
1210  }
1211 
1212  /// @brief Return true if none of the bits are set in this Mask.
1213  __hostdev__ bool isOff() const
1214  {
1215  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1216  if (mWords[i] != uint64_t(0))
1217  return false;
1218  return true;
1219  }
1220 
1221  /// @brief Set the specified bit on.
1222  __hostdev__ void setOn(uint32_t n) { mWords[n >> 6] |= uint64_t(1) << (n & 63); }
1223  /// @brief Set the specified bit off.
1224  __hostdev__ void setOff(uint32_t n) { mWords[n >> 6] &= ~(uint64_t(1) << (n & 63)); }
1225 
1226 #if defined(__CUDACC__) // the following functions only run on the GPU!
1227  __device__ inline void setOnAtomic(uint32_t n)
1228  {
1229  atomicOr(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), 1ull << (n & 63));
1230  }
1231  __device__ inline void setOffAtomic(uint32_t n)
1232  {
1233  atomicAnd(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), ~(1ull << (n & 63)));
1234  }
1235  __device__ inline void setAtomic(uint32_t n, bool on)
1236  {
1237  on ? this->setOnAtomic(n) : this->setOffAtomic(n);
1238  }
1239 /*
1240  template<typename WordT>
1241  __device__ inline void setWordAtomic(WordT w, uint32_t n)
1242  {
1243  static_assert(util::is_same<WordT, uint8_t, uint16_t, uint32_t, uint64_t>::value);
1244  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1245  if constexpr(util::is_same<WordT,uint8_t>::value) {
1246  mask <<= x;
1247  } else if constexpr(util::is_same<WordT,uint16_t>::value) {
1248  unsigned int mask = w;
1249  if (n >> 1) mask <<= 16;
1250  atomicOr(reinterpret_cast<unsigned int*>(this) + n, mask);
1251  } else if constexpr(util::is_same<WordT,uint32_t>::value) {
1252  atomicOr(reinterpret_cast<unsigned int*>(this) + n, w);
1253  } else {
1254  atomicOr(reinterpret_cast<unsigned long long int*>(this) + n, w);
1255  }
1256  }
1257 */
1258 #endif
1259  /// @brief Set the specified bit on or off.
1260  __hostdev__ void set(uint32_t n, bool on)
1261  {
1262 #if 1 // switch between branchless
1263  auto& word = mWords[n >> 6];
1264  n &= 63;
1265  word &= ~(uint64_t(1) << n);
1266  word |= uint64_t(on) << n;
1267 #else
1268  on ? this->setOn(n) : this->setOff(n);
1269 #endif
1270  }
1271 
1272  /// @brief Set all bits on
1273  __hostdev__ void setOn()
1274  {
1275  for (uint32_t i = 0; i < WORD_COUNT; ++i)mWords[i] = ~uint64_t(0);
1276  }
1277 
1278  /// @brief Set all bits off
1279  __hostdev__ void setOff()
1280  {
1281  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = uint64_t(0);
1282  }
1283 
1284  /// @brief Set all bits on or off
1285  __hostdev__ void set(bool on)
1286  {
1287  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1288  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = v;
1289  }
1290  /// @brief Toggle the state of all bits in the mask
1291  __hostdev__ void toggle()
1292  {
1293  uint32_t n = WORD_COUNT;
1294  for (auto* w = mWords; n--; ++w) *w = ~*w;
1295  }
1296  __hostdev__ void toggle(uint32_t n) { mWords[n >> 6] ^= uint64_t(1) << (n & 63); }
1297 
1298  /// @brief Bitwise intersection
1299  __hostdev__ Mask& operator&=(const Mask& other)
1300  {
1301  uint64_t* w1 = mWords;
1302  const uint64_t* w2 = other.mWords;
1303  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= *w2;
1304  return *this;
1305  }
1306  /// @brief Bitwise union
1307  __hostdev__ Mask& operator|=(const Mask& other)
1308  {
1309  uint64_t* w1 = mWords;
1310  const uint64_t* w2 = other.mWords;
1311  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 |= *w2;
1312  return *this;
1313  }
1314  /// @brief Bitwise difference
1315  __hostdev__ Mask& operator-=(const Mask& other)
1316  {
1317  uint64_t* w1 = mWords;
1318  const uint64_t* w2 = other.mWords;
1319  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= ~*w2;
1320  return *this;
1321  }
1322  /// @brief Bitwise XOR
1323  __hostdev__ Mask& operator^=(const Mask& other)
1324  {
1325  uint64_t* w1 = mWords;
1326  const uint64_t* w2 = other.mWords;
1327  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 ^= *w2;
1328  return *this;
1329  }
1330 
1332  template<bool ON>
1333  __hostdev__ uint32_t findFirst() const
1334  {
1335  uint32_t n = 0u;
1336  const uint64_t* w = mWords;
1337  for (; n < WORD_COUNT && !(ON ? *w : ~*w); ++w, ++n);
1338  return n < WORD_COUNT ? (n << 6) + util::findLowestOn(ON ? *w : ~*w) : SIZE;
1339  }
1340 
1342  template<bool ON>
1343  __hostdev__ uint32_t findNext(uint32_t start) const
1344  {
1345  uint32_t n = start >> 6; // initiate
1346  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1347  uint32_t m = start & 63u;
1348  uint64_t b = ON ? mWords[n] : ~mWords[n];
1349  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1350  b &= ~uint64_t(0u) << m; // mask out lower bits
1351  while (!b && ++n < WORD_COUNT) b = ON ? mWords[n] : ~mWords[n]; // find next non-zero word
1352  return b ? (n << 6) + util::findLowestOn(b) : SIZE; // catch last word=0
1353  }
1354 
1356  template<bool ON>
1357  __hostdev__ uint32_t findPrev(uint32_t start) const
1358  {
1359  uint32_t n = start >> 6; // initiate
1360  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1361  uint32_t m = start & 63u;
1362  uint64_t b = ON ? mWords[n] : ~mWords[n];
1363  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1364  b &= (uint64_t(1u) << m) - 1u; // mask out higher bits
1365  while (!b && n) b = ON ? mWords[--n] : ~mWords[--n]; // find previous non-zero word
1366  return b ? (n << 6) + util::findHighestOn(b) : SIZE; // catch first word=0
1367  }
1368 
1369 private:
1370  uint64_t mWords[WORD_COUNT];
1371 }; // Mask class
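// Usage sketch (illustrative only; exampleMask is a hypothetical helper): Mask<3> holds the 512 (8^3)
// bits of a leaf node, supporting random access, population counts and sequential iteration.
__hostdev__ inline uint32_t exampleMask()
{
    Mask<3> mask;// all 512 bits are initially off
    mask.setOn(0);
    mask.setOn(7);
    mask.set(7, false);// equivalent to mask.setOff(7)
    uint32_t count = 0;
    for (auto iter = mask.beginOn(); iter; ++iter) ++count;// visits bit 0 only
    return count + mask.countOn();// 1 + 1 = 2
}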
1372 
1373 // ----------------------------> Map <--------------------------------------
1374 
1375 /// @brief Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation
1376 struct Map
1377 { // 264B (not 32B aligned!)
1378  float mMatF[9]; // 9*4B <- 3x3 matrix
1379  float mInvMatF[9]; // 9*4B <- 3x3 matrix
1380  float mVecF[3]; // 3*4B <- translation
1381  float mTaperF; // 4B, placeholder for taper value
1382  double mMatD[9]; // 9*8B <- 3x3 matrix
1383  double mInvMatD[9]; // 9*8B <- 3x3 matrix
1384  double mVecD[3]; // 3*8B <- translation
1385  double mTaperD; // 8B, placeholder for taper value
1386 
1387  /// @brief Default constructor for the identity map
1388  __hostdev__ Map()
1389  : mMatF{ 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1390  , mInvMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1391  , mVecF{0.0f, 0.0f, 0.0f}
1392  , mTaperF{1.0f}
1393  , mMatD{ 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1394  , mInvMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1395  , mVecD{0.0, 0.0, 0.0}
1396  , mTaperD{1.0}
1397  {
1398  }
1399  __hostdev__ Map(double s, const Vec3d& t = Vec3d(0.0, 0.0, 0.0))
1400  : mMatF{float(s), 0.0f, 0.0f, 0.0f, float(s), 0.0f, 0.0f, 0.0f, float(s)}
1401  , mInvMatF{1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s)}
1402  , mVecF{float(t[0]), float(t[1]), float(t[2])}
1403  , mTaperF{1.0f}
1404  , mMatD{s, 0.0, 0.0, 0.0, s, 0.0, 0.0, 0.0, s}
1405  , mInvMatD{1.0 / s, 0.0, 0.0, 0.0, 1.0 / s, 0.0, 0.0, 0.0, 1.0 / s}
1406  , mVecD{t[0], t[1], t[2]}
1407  , mTaperD{1.0}
1408  {
1409  }
1410 
1411  /// @brief Initialize the member data from 3x3 or 4x4 matrices
1412  /// @note This is not __hostdev__ since then MatT=openvdb::Mat4d will produce warnings
1413  template<typename MatT, typename Vec3T>
1414  void set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper = 1.0);
1415 
1416  /// @brief Initialize the member data from 4x4 matrices
1417  /// @note The last (4th) row of invMat is actually ignored.
1418  /// This is not __hostdev__ since then Mat4T=openvdb::Mat4d will produce warnings
1419  template<typename Mat4T>
1420  void set(const Mat4T& mat, const Mat4T& invMat, double taper = 1.0) { this->set(mat, invMat, mat[3], taper); }
1421 
1422  template<typename Vec3T>
1423  void set(double scale, const Vec3T& translation, double taper = 1.0);
1424 
1425  /// @brief Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
1426  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1427  /// @tparam Vec3T Template type of the 3D vector to be mapped
1428  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1429  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1430  template<typename Vec3T>
1431  __hostdev__ Vec3T applyMap(const Vec3T& ijk) const { return math::matMult(mMatD, mVecD, ijk); }
1432 
1433  /// @brief Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
1434  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1435  /// @tparam Vec3T Template type of the 3D vector to be mapped
1436  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1437  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1438  template<typename Vec3T>
1439  __hostdev__ Vec3T applyMapF(const Vec3T& ijk) const { return math::matMult(mMatF, mVecF, ijk); }
1440 
1441  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1442  /// e.g. scale and rotation WITHOUT translation.
1443  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1444  /// @tparam Vec3T Template type of the 3D vector to be mapped
1445  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1446  /// @return linear forward 3x3 mapping of the input vector
1447  template<typename Vec3T>
1448  __hostdev__ Vec3T applyJacobian(const Vec3T& ijk) const { return math::matMult(mMatD, ijk); }
1449 
1450  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1451  /// e.g. scale and rotation WITHOUT translation.
1452  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1453  /// @tparam Vec3T Template type of the 3D vector to be mapped
1454  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1455  /// @return linear forward 3x3 mapping of the input vector
1456  template<typename Vec3T>
1457  __hostdev__ Vec3T applyJacobianF(const Vec3T& ijk) const { return math::matMult(mMatF, ijk); }
1458 
1459  /// @brief Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
1460  /// @note Typically this operation is used for the world -> index mapping
1461  /// @tparam Vec3T Template type of the 3D vector to be mapped
1462  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1463  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1464  template<typename Vec3T>
1465  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const
1466  {
1467  return math::matMult(mInvMatD, Vec3T(xyz[0] - mVecD[0], xyz[1] - mVecD[1], xyz[2] - mVecD[2]));
1468  }
1469 
1470  /// @brief Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
1471  /// @note Typically this operation is used for the world -> index mapping
1472  /// @tparam Vec3T Template type of the 3D vector to be mapped
1473  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1474  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1475  template<typename Vec3T>
1476  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const
1477  {
1478  return math::matMult(mInvMatF, Vec3T(xyz[0] - mVecF[0], xyz[1] - mVecF[1], xyz[2] - mVecF[2]));
1479  }
1480 
1481  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1482  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1483  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1484  /// @tparam Vec3T Template type of the 3D vector to be mapped
1485  /// @param xyz 3D vector to be mapped - typically a floating point direction in world space
1486  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1487  template<typename Vec3T>
1488  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return math::matMult(mInvMatD, xyz); }
1489 
1490  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1491  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1492  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1493  /// @tparam Vec3T Template type of the 3D vector to be mapped
1494  /// @param xyz 3D vector to be mapped - typically a floating point direction in world space
1495  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1496  template<typename Vec3T>
1497  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return math::matMult(mInvMatF, xyz); }
1498 
1499  /// @brief Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1500  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1501  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1502  /// @tparam Vec3T Template type of the 3D vector to be mapped
1503  /// @param xyz 3D vector to be mapped
1504  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1505  template<typename Vec3T>
1506  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return math::matMultT(mInvMatD, xyz); }
1507  template<typename Vec3T>
1508  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return math::matMultT(mInvMatF, xyz); }
1509 
1510  /// @brief Return the size of a voxel in each coordinate direction, measured at the origin
1511  __hostdev__ Vec3d getVoxelSize() const { return this->applyMap(Vec3d(1)) - this->applyMap(Vec3d(0)); }
1512 }; // Map
1513 
1514 template<typename MatT, typename Vec3T>
1515 inline void Map::set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper)
1516 {
1517  float * mf = mMatF, *vf = mVecF, *mif = mInvMatF;
1518  double *md = mMatD, *vd = mVecD, *mid = mInvMatD;
1519  mTaperF = static_cast<float>(taper);
1520  mTaperD = taper;
1521  for (int i = 0; i < 3; ++i) {
1522  *vd++ = translate[i]; //translation
1523  *vf++ = static_cast<float>(translate[i]); //translation
1524  for (int j = 0; j < 3; ++j) {
1525  *md++ = mat[j][i]; //transposed
1526  *mid++ = invMat[j][i];
1527  *mf++ = static_cast<float>(mat[j][i]); //transposed
1528  *mif++ = static_cast<float>(invMat[j][i]);
1529  }
1530  }
1531 }
1532 
1533 template<typename Vec3T>
1534 inline void Map::set(double dx, const Vec3T& trans, double taper)
1535 {
1536  NANOVDB_ASSERT(dx > 0.0);
1537  const double mat[3][3] = { {dx, 0.0, 0.0}, // row 0
1538  {0.0, dx, 0.0}, // row 1
1539  {0.0, 0.0, dx} }; // row 2
1540  const double idx = 1.0 / dx;
1541  const double invMat[3][3] = { {idx, 0.0, 0.0}, // row 0
1542  {0.0, idx, 0.0}, // row 1
1543  {0.0, 0.0, idx} }; // row 2
1544  this->set(mat, invMat, trans, taper);
1545 }
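The snippet below is an illustrative usage sketch, not part of NanoVDB.h: it shows how client code might build a uniform-scale Map with the set() overload above and convert a point between world and index space. The function name exampleMapUsage is hypothetical.

inline void exampleMapUsage() // illustrative sketch only
{
    Map map;                                    // identity transform
    map.set(0.5, Vec3d(0.0, 0.0, 0.0));         // uniform voxel size of 0.5, no translation
    const Vec3d xyz(1.0, 2.0, 3.0);             // point in world space
    const Vec3d ijk = map.applyInverseMap(xyz); // world -> index: (2, 4, 6)
    const Vec3d wld = map.applyMap(ijk);        // index -> world: back to (1, 2, 3)
    (void)wld;
}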
1546 
1547 // ----------------------------> GridBlindMetaData <--------------------------------------
1548 
1549 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridBlindMetaData
1550 { // 288 bytes
1551  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less!
1552  int64_t mDataOffset; // byte offset to the blind data, relative to GridBlindMetaData::this.
1553  uint64_t mValueCount; // number of blind values, e.g. point count
1554  uint32_t mValueSize;// byte size of each value, e.g. 4 if mDataType=Float and 1 if mDataType=Unknown since that amounts to char
1555  GridBlindDataSemantic mSemantic; // semantic meaning of the data.
1556  GridBlindDataClass mDataClass; // 4 bytes
1557  GridType mDataType; // 4 bytes
1558  char mName[MaxNameSize]; // note this includes the NULL termination
1559  // no padding required for 32 byte alignment
1560 
1561  /// @brief Empty constructor
1562  GridBlindMetaData()
1563  : mDataOffset(0)
1564  , mValueCount(0)
1565  , mValueSize(0)
1566  , mSemantic(GridBlindDataSemantic::Unknown)
1567  , mDataClass(GridBlindDataClass::Unknown)
1568  , mDataType(GridType::Unknown)
1569  {
1570  util::memzero(mName, MaxNameSize);
1571  }
1572 
1573  GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
1574  : mDataOffset(dataOffset)
1575  , mValueCount(valueCount)
1576  , mValueSize(valueSize)
1577  , mSemantic(semantic)
1578  , mDataClass(dataClass)
1579  , mDataType(dataType)
1580  {
1581  util::memzero(mName, MaxNameSize);
1582  }
1583 
1584  /// @brief Copy constructor that resets mDataOffset and copies mName
1585  GridBlindMetaData(const GridBlindMetaData& other)
1586  : mDataOffset(util::PtrDiff(util::PtrAdd(&other, other.mDataOffset), this))
1587  , mValueCount(other.mValueCount)
1588  , mValueSize(other.mValueSize)
1589  , mSemantic(other.mSemantic)
1590  , mDataClass(other.mDataClass)
1591  , mDataType(other.mDataType)
1592  {
1593  util::strncpy(mName, other.mName, MaxNameSize);
1594  }
1595 
1596  /// @brief Copy assignment operator that resets mDataOffset and copies mName
1597  /// @param rhs right-hand instance to copy
1598  /// @return reference to itself
1599  GridBlindMetaData& operator=(const GridBlindMetaData& rhs)
1600  {
1601  mDataOffset = util::PtrDiff(util::PtrAdd(&rhs, rhs.mDataOffset), this);
1602  mValueCount = rhs.mValueCount;
1603  mValueSize = rhs.mValueSize;
1604  mSemantic = rhs.mSemantic;
1605  mDataClass = rhs.mDataClass;
1606  mDataType = rhs.mDataType;
1607  util::strncpy(mName, rhs.mName, MaxNameSize);
1608  return *this;
1609  }
1610 
1611  __hostdev__ void setBlindData(const void* blindData)
1612  {
1613  mDataOffset = util::PtrDiff(blindData, this);
1614  }
1615 
1616  /// @brief Sets the name string
1617  /// @param name c-string source name
1618  /// @return returns false if @c name has too many characters
1619  __hostdev__ bool setName(const char* name){return util::strncpy(mName, name, MaxNameSize)[MaxNameSize-1] == '\0';}
1620 
1621  /// @brief returns a const void pointer to the blind data
1622  /// @note assumes that setBlindData was called
1623  __hostdev__ const void* blindData() const
1624  {
1625  NANOVDB_ASSERT(mDataOffset != 0);
1626  return util::PtrAdd(this, mDataOffset);
1627  }
1628 
1629  /// @brief Get a const pointer to the blind data represented by this meta data
1630  /// @tparam BlindDataT Expected value type of the blind data.
1631  /// @return Returns NULL if mDataType!=toGridType<BlindDataT>(), else a const pointer of type BlindDataT.
1632  /// @note Use mDataType=Unknown if BlindDataT is a custom data type unknown to NanoVDB.
1633  template<typename BlindDataT>
1634  __hostdev__ const BlindDataT* getBlindData() const
1635  {
1636  return mDataOffset && (mDataType == toGridType<BlindDataT>()) ? util::PtrAdd<BlindDataT>(this, mDataOffset) : nullptr;
1637  }
1638 
1639  /// @brief return true if this meta data has a valid combination of semantic, class and value tags
1640  /// @note this does not check if the mDataOffset has been set!
1641  __hostdev__ bool isValid() const
1642  {
1643  auto check = [&]()->bool{
1644  switch (mDataType){
1645  case GridType::Unknown: return mValueSize==1u;// i.e. we encode data as mValueCount chars
1646  case GridType::Float: return mValueSize==4u;
1647  case GridType::Double: return mValueSize==8u;
1648  case GridType::Int16: return mValueSize==2u;
1649  case GridType::Int32: return mValueSize==4u;
1650  case GridType::Int64: return mValueSize==8u;
1651  case GridType::Vec3f: return mValueSize==12u;
1652  case GridType::Vec3d: return mValueSize==24u;
1653  case GridType::Half: return mValueSize==2u;
1654  case GridType::RGBA8: return mValueSize==4u;
1655  case GridType::Fp8: return mValueSize==1u;
1656  case GridType::Fp16: return mValueSize==2u;
1657  case GridType::Vec4f: return mValueSize==16u;
1658  case GridType::Vec4d: return mValueSize==32u;
1659  case GridType::Vec3u8: return mValueSize==3u;
1660  case GridType::Vec3u16: return mValueSize==6u;
1661  default: return true;}// all other combinations are valid
1662  };
1663  return nanovdb::isValid(mDataClass, mSemantic, mDataType) && check();
1664  }
1665 
1666  /// @brief return size in bytes of the blind data represented by this blind meta data
1667  /// @note This size includes possible padding for 32 byte alignment. The actual amount
1668  /// of blind data is mValueCount * mValueSize
1669  __hostdev__ uint64_t blindDataSize() const
1670  {
1671  return math::AlignUp<NANOVDB_DATA_ALIGNMENT>(mValueCount * mValueSize);
1672  }
1673 }; // GridBlindMetaData
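As an illustrative host-side sketch (not part of NanoVDB.h; the function name and the "radius" attribute are made up), the following shows how a GridBlindMetaData record could describe an external float array and how the typed accessor getBlindData<T>() recovers it:

inline void exampleBlindMetaData(const float* values, uint64_t count) // illustrative sketch only
{
    GridBlindMetaData meta(0, count, sizeof(float),
                           GridBlindDataSemantic::Unknown,
                           GridBlindDataClass::AttributeArray,
                           GridType::Float);
    meta.setName("radius");        // returns false only if the name is too long
    meta.setBlindData(values);     // record the byte offset from &meta to the array
    const float* p = meta.getBlindData<float>(); // non-NULL since mDataType == GridType::Float
    NANOVDB_ASSERT(p == values);
    (void)p;
}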
1674 
1675 // ----------------------------> NodeTrait <--------------------------------------
1676 
1677 /// @brief Struct to derive node type from its level in a given
1678 /// grid, tree or root while preserving constness
1679 template<typename GridOrTreeOrRootT, int LEVEL>
1680 struct NodeTrait;
1681 
1682 // Partial template specialization of above Node struct
1683 template<typename GridOrTreeOrRootT>
1684 struct NodeTrait<GridOrTreeOrRootT, 0>
1685 {
1686  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1687  using Type = typename GridOrTreeOrRootT::LeafNodeType;
1688  using type = typename GridOrTreeOrRootT::LeafNodeType;
1689 };
1690 template<typename GridOrTreeOrRootT>
1691 struct NodeTrait<const GridOrTreeOrRootT, 0>
1692 {
1693  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1694  using Type = const typename GridOrTreeOrRootT::LeafNodeType;
1695  using type = const typename GridOrTreeOrRootT::LeafNodeType;
1696 };
1697 
1698 template<typename GridOrTreeOrRootT>
1699 struct NodeTrait<GridOrTreeOrRootT, 1>
1700 {
1701  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1702  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1703  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1704 };
1705 template<typename GridOrTreeOrRootT>
1706 struct NodeTrait<const GridOrTreeOrRootT, 1>
1707 {
1708  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1709  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1710  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1711 };
1712 template<typename GridOrTreeOrRootT>
1713 struct NodeTrait<GridOrTreeOrRootT, 2>
1714 {
1715  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1716  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1717  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1718 };
1719 template<typename GridOrTreeOrRootT>
1720 struct NodeTrait<const GridOrTreeOrRootT, 2>
1721 {
1722  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1723  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1724  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1725 };
1726 template<typename GridOrTreeOrRootT>
1727 struct NodeTrait<GridOrTreeOrRootT, 3>
1728 {
1729  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1730  using Type = typename GridOrTreeOrRootT::RootNodeType;
1731  using type = typename GridOrTreeOrRootT::RootNodeType;
1732 };
1733 
1734 template<typename GridOrTreeOrRootT>
1735 struct NodeTrait<const GridOrTreeOrRootT, 3>
1736 {
1737  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1738  using Type = const typename GridOrTreeOrRootT::RootNodeType;
1739  using type = const typename GridOrTreeOrRootT::RootNodeType;
1740 };
1741 
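A brief illustrative sketch (not part of NanoVDB.h; the function name is hypothetical) of how NodeTrait is typically used to name node types by their level, here to count the leaf nodes of an arbitrary tree:

template<typename TreeT>
inline uint32_t exampleCountLeafNodes(const TreeT& tree) // illustrative sketch only
{
    using LeafT  = typename NodeTrait<TreeT, 0>::type; // level 0 = LeafNode
    using UpperT = typename NodeTrait<TreeT, 2>::type; // level 2 = upper InternalNode
    static_assert(LeafT::LEVEL == 0 && UpperT::LEVEL == 2, "unexpected node levels");
    return tree.template nodeCount<LeafT>();
}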
1742 // ------------> Forward declarations of accelerated random access methods <---------------
1743 
1744 template<typename BuildT>
1745 struct GetValue;
1746 template<typename BuildT>
1747 struct SetValue;
1748 template<typename BuildT>
1749 struct SetVoxel;
1750 template<typename BuildT>
1751 struct GetState;
1752 template<typename BuildT>
1753 struct GetDim;
1754 template<typename BuildT>
1755 struct GetLeaf;
1756 template<typename BuildT>
1757 struct ProbeValue;
1758 template<typename BuildT>
1759 struct GetNodeInfo;
1760 
1761 // ----------------------------> CheckMode <----------------------------------
1762 
1763 /// @brief List of different modes for computing a checksum
1764 enum class CheckMode : uint32_t { Disable = 0, // no computation
1765  Empty = 0,
1766  Half = 1,
1767  Partial = 1, // fast but approximate
1768  Default = 1, // defaults to Partial
1769  Full = 2, // slow but accurate
1770  End = 3, // marks the end of the enum list
1771  StrLen = 9 + End};
1772 
1773 /// @brief Prints CheckMode enum to a c-string
1774 /// @param dst Destination c-string
1775 /// @param mode CheckMode enum to be converted to string
1776 /// @return destinations string @c dst
1777 __hostdev__ inline char* toStr(char *dst, CheckMode mode)
1778 {
1779  switch (mode){
1780  case CheckMode::Half: return util::strcpy(dst, "half");
1781  case CheckMode::Full: return util::strcpy(dst, "full");
1782  default: return util::strcpy(dst, "disabled");// StrLen = 8 + 1 + End
1783  }
1784 }
1785 
1786 // ----------------------------> Checksum <----------------------------------
1787 
1788 /// @brief Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data
1789 /// and one for the remaining grid nodes.
1790 class Checksum
1791 {
1792  /// Three types of checksums:
1793  /// 1) Empty: all 64 bits are on (used to signify a disabled or undefined checksum)
1794  /// 2) Half: Upper 32 bits are on and not all of lower 32 bits are on (lower 32 bits checksum head of grid)
1795  /// 3) Full: Not all of the 64 bits are one (lower 32 bits checksum head of grid and upper 32 bits checksum tail of grid)
1796  union { uint32_t mCRC32[2]; uint64_t mCRC64; };// mCRC32[0] is checksum of Grid, Tree and Root, and mCRC32[1] is checksum of nodes
1797 
1798 public:
1799 
1800  static constexpr uint32_t EMPTY32 = ~uint32_t{0};
1801  static constexpr uint64_t EMPTY64 = ~uint64_t(0);
1802 
1803  /// @brief default constructor initiates checksum to EMPTY
1804  __hostdev__ Checksum() : mCRC64{EMPTY64} {}
1805 
1806  /// @brief Constructor that allows the two 32bit checksums to be initiated explicitly
1807  /// @param head Initial 32bit CRC checksum of grid, tree and root data
1808  /// @param tail Initial 32bit CRC checksum of all the nodes and blind data
1809  __hostdev__ Checksum(uint32_t head, uint32_t tail) : mCRC32{head, tail} {}
1810 
1811  /// @brief Constructor that initiates the checksum from a 64 bit value and a CheckMode
1812  /// @param checksum 64 bit checksum, i.e. two packed 32 bit CRC codes
1813  /// @param mode CheckMode that determines how @c checksum is interpreted (Disable resets it to EMPTY)
1814  __hostdev__ Checksum(uint64_t checksum, CheckMode mode = CheckMode::Full) : mCRC64{mode == CheckMode::Disable ? EMPTY64 : checksum}
1815  {
1816  if (mode == CheckMode::Partial) mCRC32[1] = EMPTY32;
1817  }
1818 
1819  /// @brief return the 64 bit checksum of this instance
1820  [[deprecated("Use Checksum::data instead.")]]
1821  __hostdev__ uint64_t checksum() const { return mCRC64; }
1822  [[deprecated("Use Checksum::head and Ckecksum::tail instead.")]]
1823  __hostdev__ uint32_t& checksum(int i) {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1824  [[deprecated("Use Checksum::head and Ckecksum::tail instead.")]]
1825  __hostdev__ uint32_t checksum(int i) const {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1826 
1827  __hostdev__ uint64_t full() const { return mCRC64; }
1828  __hostdev__ uint64_t& full() { return mCRC64; }
1829  __hostdev__ uint32_t head() const { return mCRC32[0]; }
1830  __hostdev__ uint32_t& head() { return mCRC32[0]; }
1831  __hostdev__ uint32_t tail() const { return mCRC32[1]; }
1832  __hostdev__ uint32_t& tail() { return mCRC32[1]; }
1833 
1834  /// @brief return true if the 64 bit checksum is partial, i.e. of head only
1835  [[deprecated("Use Checksum::isHalf instead.")]]
1836  __hostdev__ bool isPartial() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1837  __hostdev__ bool isHalf() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1838 
1839  /// @brief return true if the 64 bit checksum is full, i.e. of both head and nodes
1840  __hostdev__ bool isFull() const { return mCRC64 != EMPTY64 && mCRC32[1] != EMPTY32; }
1841 
1842  /// @brief return true if the 64 bit checksum is disabled (unset)
1843  __hostdev__ bool isEmpty() const { return mCRC64 == EMPTY64; }
1844 
1845  __hostdev__ void disable() { mCRC64 = EMPTY64; }
1846 
1847  /// @brief return the mode of the 64 bit checksum
1848  __hostdev__ CheckMode mode() const
1849  {
1850  return mCRC64 == EMPTY64 ? CheckMode::Disable :
1851  mCRC32[1] == EMPTY32 ? CheckMode::Partial : CheckMode::Full;
1852  }
1853 
1854  /// @brief return true if the checksums are identical
1855  /// @param rhs other Checksum
1856  __hostdev__ bool operator==(const Checksum &rhs) const {return mCRC64 == rhs.mCRC64;}
1857 
1858  /// @brief return true if the checksums are not identical
1859  /// @param rhs other Checksum
1860  __hostdev__ bool operator!=(const Checksum &rhs) const {return mCRC64 != rhs.mCRC64;}
1861 };// Checksum
1862 
1863 /// @brief Maps 64 bit checksum to CheckMode enum
1864 /// @param checksum 64 bit checksum with two CRC32 codes
1865 /// @return CheckMode enum
1866 __hostdev__ inline CheckMode toCheckMode(const Checksum &checksum){return checksum.mode();}
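The following is an illustrative sketch (not part of NanoVDB.h; the function name is hypothetical) of the Checksum/CheckMode API above: a default-constructed checksum is disabled, while one with only a head CRC reports the half/partial mode.

inline void exampleChecksum() // illustrative sketch only
{
    Checksum empty; // default constructed: all 64 bits on, i.e. disabled
    NANOVDB_ASSERT(empty.isEmpty() && toCheckMode(empty) == CheckMode::Disable);
    Checksum half(0x12345678u, Checksum::EMPTY32); // head CRC only
    NANOVDB_ASSERT(half.isHalf() && !half.isFull());
    char str[int(CheckMode::StrLen)]; // large enough for any of the strings produced by toStr
    printf("checksum mode = %s\n", toStr(str, half.mode()));
}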
1867 
1868 // ----------------------------> Grid <--------------------------------------
1869 
1870 /*
1871  The following class and comment are for internal use only
1872 
1873  Memory layout:
1874 
1875  Grid -> 39 x double (world bbox and affine transformation)
1876  Tree -> Root 3 x ValueType + int32_t + N x Tiles (background,min,max,tileCount + tileCount x Tiles)
1877 
1878  N2 upper InternalNodes each with 2 bit masks, N2 tiles, and min/max values
1879 
1880  N1 lower InternalNodes each with 2 bit masks, N1 tiles, and min/max values
1881 
1882  N0 LeafNodes each with a bit mask, N0 ValueTypes and min/max
1883 
1884  Example layout: ("---" implies it has a custom offset, "..." implies zero or more)
1885  [GridData][TreeData]---[RootData][ROOT TILES...]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
1886 */
1887 
1888 /// @brief Struct with all the member data of the Grid (useful during serialization of an openvdb grid)
1889 ///
1890 /// @note The transform is assumed to be affine (so linear) and have uniform scale! So frustum transforms
1891 /// and non-uniform scaling are not supported (primarily because they complicate ray-tracing in index space)
1892 ///
1893 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
1894 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridData
1895 { // sizeof(GridData) = 672B
1896  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less
1897  uint64_t mMagic; // 8B (0) magic to validate it is valid grid data.
1898  Checksum mChecksum; // 8B (8). Checksum of grid buffer.
1899  Version mVersion; // 4B (16) major, minor, and patch version numbers
1900  BitFlags<32> mFlags; // 4B (20). flags for grid.
1901  uint32_t mGridIndex; // 4B (24). Index of this grid in the buffer
1902  uint32_t mGridCount; // 4B (28). Total number of grids in the buffer
1903  uint64_t mGridSize; // 8B (32). byte count of this entire grid occupied in the buffer.
1904  char mGridName[MaxNameSize]; // 256B (40)
1905  Map mMap; // 264B (296). affine transformation between index and world space in both single and double precision
1906  Vec3dBBox mWorldBBox; // 48B (560). floating-point AABB of active values in WORLD SPACE (2 x 3 doubles)
1907  Vec3d mVoxelSize; // 24B (608). size of a voxel in world units
1908  GridClass mGridClass; // 4B (632).
1909  GridType mGridType; // 4B (636).
1910  int64_t mBlindMetadataOffset; // 8B (640). offset to beginning of GridBlindMetaData structures that follow this grid.
1911  uint32_t mBlindMetadataCount; // 4B (648). count of GridBlindMetaData structures that follow this grid.
1912  uint32_t mData0; // 4B (652) unused
1913  uint64_t mData1; // 8B (656) is used for the total number of values indexed by an IndexGrid
1914  uint64_t mData2; // 8B (664) padding to 32 B alignment
1915  /// @brief Use this method to initiate most member data
1916  GridData& operator=(const GridData&) = default;
1917  //__hostdev__ GridData& operator=(const GridData& other){return *util::memcpy(this, &other);}
1918  __hostdev__ void init(std::initializer_list<GridFlags> list = {GridFlags::IsBreadthFirst},
1919  uint64_t gridSize = 0u,
1920  const Map& map = Map(),
1921  GridType gridType = GridType::Unknown,
1922  GridClass gridClass = GridClass::Unknown)
1923  {
1924 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1925  mMagic = NANOVDB_MAGIC_GRID;
1926 #else
1927  mMagic = NANOVDB_MAGIC_NUMB;
1928 #endif
1929  mChecksum.disable();// all 64 bits ON means checksum is disabled
1930  mVersion = Version();
1931  mFlags.initMask(list);
1932  mGridIndex = 0u;
1933  mGridCount = 1u;
1934  mGridSize = gridSize;
1935  mGridName[0] = '\0';
1936  mMap = map;
1937  mWorldBBox = Vec3dBBox();// invalid bbox
1938  mVoxelSize = map.getVoxelSize();
1939  mGridClass = gridClass;
1940  mGridType = gridType;
1941  mBlindMetadataOffset = mGridSize; // i.e. no blind data
1942  mBlindMetadataCount = 0u; // i.e. no blind data
1943  mData0 = 0u; // zero padding
1944  mData1 = 0u; // only used for index and point grids
1945 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1946  mData2 = 0u;// unused
1947 #else
1948  mData2 = NANOVDB_MAGIC_GRID; // since version 32.6.0 (will change in the future)
1949 #endif
1950  }
1951  /// @brief return true if the magic number and the version are both valid
1952  __hostdev__ bool isValid() const {
1953  // Before v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and mData2 was undefined
1954  // For v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and toMagic(mData2) = MagicType::NanoGrid
1955  // After v32.7.X: toMagic(mMagic) = MagicType::NanoGrid and mData2 will again be undefined
1956  const MagicType magic = toMagic(mMagic);
1957  if (magic == MagicType::NanoGrid || toMagic(mData2) == MagicType::NanoGrid) return true;
1958  bool test = magic == MagicType::NanoVDB;// could be GridData or io::FileHeader
1959  if (test) test = mVersion.isCompatible();
1960  if (test) test = mGridCount > 0u && mGridIndex < mGridCount;
1961  if (test) test = mGridClass < GridClass::End && mGridType < GridType::End;
1962  return test;
1963  }
1964  // Set and unset various bit flags
1965  __hostdev__ void setMinMaxOn(bool on = true) { mFlags.setMask(GridFlags::HasMinMax, on); }
1966  __hostdev__ void setBBoxOn(bool on = true) { mFlags.setMask(GridFlags::HasBBox, on); }
1967  __hostdev__ void setLongGridNameOn(bool on = true) { mFlags.setMask(GridFlags::HasLongGridName, on); }
1968  __hostdev__ void setAverageOn(bool on = true) { mFlags.setMask(GridFlags::HasAverage, on); }
1969  __hostdev__ void setStdDeviationOn(bool on = true) { mFlags.setMask(GridFlags::HasStdDeviation, on); }
1970  __hostdev__ bool setGridName(const char* src)
1971  {
1972  const bool success = (util::strncpy(mGridName, src, MaxNameSize)[MaxNameSize-1] == '\0');
1973  if (!success) mGridName[MaxNameSize-1] = '\0';
1974  return success; // returns true if input grid name is NOT longer than MaxNameSize characters
1975  }
1976  // Affine transformations based on double precision
1977  template<typename Vec3T>
1978  __hostdev__ Vec3T applyMap(const Vec3T& xyz) const { return mMap.applyMap(xyz); } // Pos: index -> world
1979  template<typename Vec3T>
1980  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const { return mMap.applyInverseMap(xyz); } // Pos: world -> index
1981  template<typename Vec3T>
1982  __hostdev__ Vec3T applyJacobian(const Vec3T& xyz) const { return mMap.applyJacobian(xyz); } // Dir: index -> world
1983  template<typename Vec3T>
1984  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return mMap.applyInverseJacobian(xyz); } // Dir: world -> index
1985  template<typename Vec3T>
1986  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return mMap.applyIJT(xyz); }
1987  // Affine transformations based on single precision
1988  template<typename Vec3T>
1989  __hostdev__ Vec3T applyMapF(const Vec3T& xyz) const { return mMap.applyMapF(xyz); } // Pos: index -> world
1990  template<typename Vec3T>
1991  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const { return mMap.applyInverseMapF(xyz); } // Pos: world -> index
1992  template<typename Vec3T>
1993  __hostdev__ Vec3T applyJacobianF(const Vec3T& xyz) const { return mMap.applyJacobianF(xyz); } // Dir: index -> world
1994  template<typename Vec3T>
1995  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return mMap.applyInverseJacobianF(xyz); } // Dir: world -> index
1996  template<typename Vec3T>
1997  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return mMap.applyIJTF(xyz); }
1998 
1999  /// @brief Return a non-const void pointer to the tree
2000  __hostdev__ void* treePtr() { return this + 1; }// TreeData is always right after GridData
2001 
2002  /// @brief Return a const void pointer to the tree
2003  __hostdev__ const void* treePtr() const { return this + 1; }// TreeData is always right after GridData
2004 
2005  /// @brief Return a const void pointer to the first node at @c LEVEL
2006  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2007  template <uint32_t LEVEL>
2008  __hostdev__ const void* nodePtr() const
2009  {
2010  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2011  const void *treeData = this + 1;// TreeData is always right after GridData
2012  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2013  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2014  }
2015 
2016  /// @brief Return a non-const void pointer to the first node at @c LEVEL
2017  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2018  /// @warning If no nodes exist at @c LEVEL, NULL is returned
2019  template <uint32_t LEVEL>
2020  __hostdev__ void* nodePtr()
2021  {
2022  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2023  void *treeData = this + 1;// TreeData is always right after GridData
2024  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2025  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2026  }
2027 
2028  /// @brief Return number of nodes at @c LEVEL
2029  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 2 means upper node
2030  template <uint32_t LEVEL>
2031  __hostdev__ uint32_t nodeCount() const
2032  {
2033  static_assert(LEVEL >= 0 && LEVEL < 3, "invalid LEVEL template parameter");
2034  return *util::PtrAdd<uint32_t>(this + 1, 4*(8 + LEVEL));// TreeData is always right after GridData
2035  }
2036 
2037  /// @brief Returns a const pointer to the blindMetaData at the specified linear offset.
2038  ///
2039  /// @warning The linear offset is assumed to be in the valid range
2040  __hostdev__ const GridBlindMetaData* blindMetaData(uint32_t n) const
2041  {
2042  NANOVDB_ASSERT(n < mBlindMetadataCount);
2043  return util::PtrAdd<GridBlindMetaData>(this, mBlindMetadataOffset) + n;
2044  }
2045 
2046  __hostdev__ const char* gridName() const
2047  {
2048  if (mFlags.isMaskOn(GridFlags::HasLongGridName)) {// search for first blind meta data that contains a name
2049  NANOVDB_ASSERT(mBlindMetadataCount > 0);
2050  for (uint32_t i = 0; i < mBlindMetadataCount; ++i) {
2051  const auto* metaData = this->blindMetaData(i);// EXTREMELY important to be a pointer
2052  if (metaData->mDataClass == GridBlindDataClass::GridName) {
2053  NANOVDB_ASSERT(metaData->mDataType == GridType::Unknown);
2054  return metaData->template getBlindData<const char>();
2055  }
2056  }
2057  NANOVDB_ASSERT(false); // should never hit this!
2058  }
2059  return mGridName;
2060  }
2061 
2062  /// @brief Return memory usage in bytes for this class only.
2063  __hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2064 
2065  /// @brief return AABB of active values in world space
2066  __hostdev__ const Vec3dBBox& worldBBox() const { return mWorldBBox; }
2067 
2068  /// @brief return AABB of active values in index space
2069  __hostdev__ const CoordBBox& indexBBox() const {return *(const CoordBBox*)(this->nodePtr<3>());}
2070 
2071  /// @brief return the size of the root table
2072  __hostdev__ uint32_t rootTableSize() const
2073  {
2074  const void *root = this->nodePtr<3>();
2075  return root ? *util::PtrAdd<uint32_t>(root, sizeof(CoordBBox)) : 0u;
2076  }
2077 
2078  /// @brief test if the grid is empty, i.e. the root table has size 0
2079  /// @return true if this grid contains no data whatsoever
2080  __hostdev__ bool isEmpty() const {return this->rootTableSize() == 0u;}
2081 
2082  /// @brief return true if RootData follows TreeData in memory without any extra padding
2083  /// @details TreeData always follows right after GridData, but the same might not be true for RootData
2084  __hostdev__ bool isRootConnected() const { return *(const uint64_t*)((const char*)(this + 1) + 24) == 64u;}
2085 }; // GridData
2086 
2087 // Forward declaration of accelerated random access class
2088 template<typename BuildT, int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1>
2089 class ReadAccessor;
2090 
2091 template<typename BuildT>
2092 using DefaultReadAccessor = ReadAccessor<BuildT, 0, 1, 2>;
2093 
2094 /// @brief Highest level of the data structure. Contains a tree and a world->index
2095 /// transform (that currently only supports uniform scaling and translation).
2096 ///
2097  /// @note This is the class whose API client code should interface with
2098 template<typename TreeT>
2099 class Grid : public GridData
2100 {
2101 public:
2102  using TreeType = TreeT;
2103  using RootType = typename TreeT::RootType;
2104  using RootNodeType = RootType;
2105  using UpperNodeType = typename RootNodeType::ChildNodeType;
2106  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2107  using LeafNodeType = typename RootType::LeafNodeType;
2108  using DataType = GridData;
2109  using ValueType = typename TreeT::ValueType;
2110  using BuildType = typename TreeT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2111  using CoordType = typename TreeT::CoordType;
2112  using AccessorType = DefaultReadAccessor<BuildType>;
2113 
2114  /// @brief Disallow constructions, copy and assignment
2115  ///
2116  /// @note Only a Serializer, defined elsewhere, can instantiate this class
2117  Grid(const Grid&) = delete;
2118  Grid& operator=(const Grid&) = delete;
2119  ~Grid() = delete;
2120 
2121  __hostdev__ Version version() const { return DataType::mVersion; }
2122 
2123  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2124 
2125  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2126 
2127  /// @brief Return memory usage in bytes for this class only.
2128  //__hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2129 
2130  /// @brief Return the memory footprint of the entire grid, i.e. including all nodes and blind data
2131  __hostdev__ uint64_t gridSize() const { return DataType::mGridSize; }
2132 
2133  /// @brief Return index of this grid in the buffer
2134  __hostdev__ uint32_t gridIndex() const { return DataType::mGridIndex; }
2135 
2136  /// @brief Return total number of grids in the buffer
2137  __hostdev__ uint32_t gridCount() const { return DataType::mGridCount; }
2138 
2139  /// @brief Return the total number of values indexed by this IndexGrid
2140  ///
2141  /// @note This method is only defined for IndexGrid = NanoGrid<ValueIndex || ValueOnIndex || ValueIndexMask || ValueOnIndexMask>
2142  template<typename T = BuildType>
2143  __hostdev__ typename util::enable_if<BuildTraits<T>::is_index, const uint64_t&>::type
2144  valueCount() const { return DataType::mData1; }
2145 
2146  /// @brief Return the total number of points indexed by this PointGrid
2147  ///
2148  /// @note This method is only defined for PointGrid = NanoGrid<Point>
2149  template<typename T = BuildType>
2150  __hostdev__ typename util::enable_if<util::is_same<T, Point>::value, const uint64_t&>::type
2151  pointCount() const { return DataType::mData1; }
2152 
2153  /// @brief Return a const reference to the tree
2154  __hostdev__ const TreeT& tree() const { return *reinterpret_cast<const TreeT*>(this->treePtr()); }
2155 
2156  /// @brief Return a non-const reference to the tree
2157  __hostdev__ TreeT& tree() { return *reinterpret_cast<TreeT*>(this->treePtr()); }
2158 
2159  /// @brief Return a new instance of a ReadAccessor used to access values in this grid
2160  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->tree().root()); }
2161 
2162  /// @brief Return a const reference to the size of a voxel in world units
2163  __hostdev__ const Vec3d& voxelSize() const { return DataType::mVoxelSize; }
2164 
2165  /// @brief Return a const reference to the Map for this grid
2166  __hostdev__ const Map& map() const { return DataType::mMap; }
2167 
2168  /// @brief world to index space transformation
2169  template<typename Vec3T>
2170  __hostdev__ Vec3T worldToIndex(const Vec3T& xyz) const { return this->applyInverseMap(xyz); }
2171 
2172  /// @brief index to world space transformation
2173  template<typename Vec3T>
2174  __hostdev__ Vec3T indexToWorld(const Vec3T& xyz) const { return this->applyMap(xyz); }
2175 
2176  /// @brief transformation from index space direction to world space direction
2177  /// @warning assumes dir to be normalized
2178  template<typename Vec3T>
2179  __hostdev__ Vec3T indexToWorldDir(const Vec3T& dir) const { return this->applyJacobian(dir); }
2180 
2181  /// @brief transformation from world space direction to index space direction
2182  /// @warning assumes dir to be normalized
2183  template<typename Vec3T>
2184  __hostdev__ Vec3T worldToIndexDir(const Vec3T& dir) const { return this->applyInverseJacobian(dir); }
2185 
2186  /// @brief transform the gradient from index space to world space.
2187  /// @details Applies the inverse jacobian transform map.
2188  template<typename Vec3T>
2189  __hostdev__ Vec3T indexToWorldGrad(const Vec3T& grad) const { return this->applyIJT(grad); }
2190 
2191  /// @brief world to index space transformation
2192  template<typename Vec3T>
2193  __hostdev__ Vec3T worldToIndexF(const Vec3T& xyz) const { return this->applyInverseMapF(xyz); }
2194 
2195  /// @brief index to world space transformation
2196  template<typename Vec3T>
2197  __hostdev__ Vec3T indexToWorldF(const Vec3T& xyz) const { return this->applyMapF(xyz); }
2198 
2199  /// @brief transformation from index space direction to world space direction
2200  /// @warning assumes dir to be normalized
2201  template<typename Vec3T>
2202  __hostdev__ Vec3T indexToWorldDirF(const Vec3T& dir) const { return this->applyJacobianF(dir); }
2203 
2204  /// @brief transformation from world space direction to index space direction
2205  /// @warning assumes dir to be normalized
2206  template<typename Vec3T>
2207  __hostdev__ Vec3T worldToIndexDirF(const Vec3T& dir) const { return this->applyInverseJacobianF(dir); }
2208 
2209  /// @brief Transforms the gradient from index space to world space.
2210  /// @details Applies the inverse jacobian transform map.
2211  template<typename Vec3T>
2212  __hostdev__ Vec3T indexToWorldGradF(const Vec3T& grad) const { return DataType::applyIJTF(grad); }
2213 
2214  /// @brief Computes an AABB of active values in world space
2215  //__hostdev__ const Vec3dBBox& worldBBox() const { return DataType::mWorldBBox; }
2216 
2217  /// @brief Computes an AABB of active values in index space
2218  ///
2219  /// @note This method is returning a floating point bounding box and not a CoordBBox. This makes
2220  /// it more useful for clipping rays.
2221  //__hostdev__ const BBox<CoordType>& indexBBox() const { return this->tree().bbox(); }
2222 
2223  /// @brief Return the total number of active voxels in this tree.
2224  __hostdev__ uint64_t activeVoxelCount() const { return this->tree().activeVoxelCount(); }
2225 
2226  /// @brief Methods related to the classification of this grid
2227  __hostdev__ bool isValid() const { return DataType::isValid(); }
2228  __hostdev__ const GridType& gridType() const { return DataType::mGridType; }
2229  __hostdev__ const GridClass& gridClass() const { return DataType::mGridClass; }
2230  __hostdev__ bool isLevelSet() const { return DataType::mGridClass == GridClass::LevelSet; }
2231  __hostdev__ bool isFogVolume() const { return DataType::mGridClass == GridClass::FogVolume; }
2232  __hostdev__ bool isStaggered() const { return DataType::mGridClass == GridClass::Staggered; }
2233  __hostdev__ bool isPointIndex() const { return DataType::mGridClass == GridClass::PointIndex; }
2234  __hostdev__ bool isGridIndex() const { return DataType::mGridClass == GridClass::IndexGrid; }
2235  __hostdev__ bool isPointData() const { return DataType::mGridClass == GridClass::PointData; }
2236  __hostdev__ bool isMask() const { return DataType::mGridClass == GridClass::Topology; }
2237  __hostdev__ bool isUnknown() const { return DataType::mGridClass == GridClass::Unknown; }
2238  __hostdev__ bool hasMinMax() const { return DataType::mFlags.isMaskOn(GridFlags::HasMinMax); }
2239  __hostdev__ bool hasBBox() const { return DataType::mFlags.isMaskOn(GridFlags::HasBBox); }
2240  __hostdev__ bool hasLongGridName() const { return DataType::mFlags.isMaskOn(GridFlags::HasLongGridName); }
2241  __hostdev__ bool hasAverage() const { return DataType::mFlags.isMaskOn(GridFlags::HasAverage); }
2242  __hostdev__ bool hasStdDeviation() const { return DataType::mFlags.isMaskOn(GridFlags::HasStdDeviation); }
2243  __hostdev__ bool isBreadthFirst() const { return DataType::mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
2244 
2245  /// @brief return true if the specified node type is laid out breadth-first in memory and has a fixed size.
2246  /// This allows for sequential access to the nodes.
2247  template<typename NodeT>
2248  __hostdev__ bool isSequential() const { return NodeT::FIXED_SIZE && this->isBreadthFirst(); }
2249 
2250  /// @brief return true if the specified node level is laid out breadth-first in memory and has a fixed size.
2251  /// This allows for sequential access to the nodes.
2252  template<int LEVEL>
2253  __hostdev__ bool isSequential() const { return NodeTrait<TreeT, LEVEL>::type::FIXED_SIZE && this->isBreadthFirst(); }
2254 
2255  /// @brief return true if nodes at all levels can safely be accessed with simple linear offsets
2256  __hostdev__ bool isSequential() const { return UpperNodeType::FIXED_SIZE && LowerNodeType::FIXED_SIZE && LeafNodeType::FIXED_SIZE && this->isBreadthFirst(); }
2257 
2258  /// @brief Return a c-string with the name of this grid
2259  __hostdev__ const char* gridName() const { return DataType::gridName(); }
2260 
2261  /// @brief Return a c-string with the name of this grid, truncated to 255 characters
2262  __hostdev__ const char* shortGridName() const { return DataType::mGridName; }
2263 
2264  /// @brief Return checksum of the grid buffer.
2265  __hostdev__ const Checksum& checksum() const { return DataType::mChecksum; }
2266 
2267  /// @brief Return true if this grid is empty, i.e. contains no values or nodes.
2268  //__hostdev__ bool isEmpty() const { return this->tree().isEmpty(); }
2269 
2270  /// @brief Return the count of blind-data encoded in this grid
2271  __hostdev__ uint32_t blindDataCount() const { return DataType::mBlindMetadataCount; }
2272 
2273  /// @brief Return the index of the first blind data with specified name if found, otherwise -1.
2274  __hostdev__ int findBlindData(const char* name) const;
2275 
2276  /// @brief Return the index of the first blind data with specified semantic if found, otherwise -1.
2277  __hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const;
2278 
2279  /// @brief Returns a const pointer to the blindData at the specified linear offset.
2280  ///
2281  /// @warning Pointer might be NULL and the linear offset is assumed to be in the valid range
2282  // this method is deprecated !!!!
2283  [[deprecated("Use Grid::getBlindData<T>() instead.")]]
2284  __hostdev__ const void* blindData(uint32_t n) const
2285  {
2286  printf("\nnanovdb::Grid::blindData is unsafe and hence deprecated! Please use nanovdb::Grid::getBlindData instead.\n\n");
2287  NANOVDB_ASSERT(n < DataType::mBlindMetadataCount);
2288  return this->blindMetaData(n).blindData();
2289  }
2290 
2291  template <typename BlindDataT>
2292  __hostdev__ const BlindDataT* getBlindData(uint32_t n) const
2293  {
2294  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2295  return this->blindMetaData(n).template getBlindData<BlindDataT>();// NULL if mismatching BlindDataT
2296  }
2297 
2298  template <typename BlindDataT>
2299  __hostdev__ BlindDataT* getBlindData(uint32_t n)
2300  {
2301  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2302  return const_cast<BlindDataT*>(this->blindMetaData(n).template getBlindData<BlindDataT>());// NULL if mismatching BlindDataT
2303  }
2304 
2305  __hostdev__ const GridBlindMetaData& blindMetaData(uint32_t n) const { return *DataType::blindMetaData(n); }
2306 
2307 private:
2308  static_assert(sizeof(GridData) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(GridData) is misaligned");
2309 }; // Class Grid
2310 
2311 template<typename TreeT>
2312 __hostdev__ int Grid<TreeT>::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
2313 {
2314  for (uint32_t i = 0, n = this->blindDataCount(); i < n; ++i) {
2315  if (this->blindMetaData(i).mSemantic == semantic)
2316  return int(i);
2317  }
2318  return -1;
2319 }
2320 
2321 template<typename TreeT>
2322 __hostdev__ int Grid<TreeT>::findBlindData(const char* name) const
2323 {
2324  auto test = [&](int n) {
2325  const char* str = this->blindMetaData(n).mName;
2326  for (int i = 0; i < GridBlindMetaData::MaxNameSize; ++i) {
2327  if (name[i] != str[i])
2328  return false;
2329  if (name[i] == '\0' && str[i] == '\0')
2330  return true;
2331  }
2332  return true; // all MaxNameSize characters matched
2333  };
2334  for (int i = 0, n = this->blindDataCount(); i < n; ++i)
2335  if (test(i))
2336  return i;
2337  return -1;
2338 }
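To summarize how client code typically interfaces with the Grid API above, here is an illustrative sketch (not part of NanoVDB.h; the function name is hypothetical) that transforms a world-space position into index space and reads the nearest voxel through a per-thread ReadAccessor, as recommended over Tree::getValue:

template<typename TreeT>
inline typename TreeT::ValueType exampleSampleGrid(const Grid<TreeT>& grid, const Vec3d& xyzWorld) // illustrative sketch only
{
    const Vec3d ijk = grid.worldToIndex(xyzWorld); // world -> index (double precision)
    const Coord voxel = Coord::Floor(ijk);         // integer coordinates of the containing voxel
    auto acc = grid.getAccessor();                 // light-weight accessor; use one instance per thread
    return acc.getValue(voxel);
}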
2339 
2340 // ----------------------------> Tree <--------------------------------------
2341 
2342 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) TreeData
2343 { // sizeof(TreeData) == 64B
2344  int64_t mNodeOffset[4];// 32B, byte offset from this tree to first leaf, lower, upper and root node. If mNodeCount[N]=0 => mNodeOffset[N]==mNodeOffset[N+1]
2345  uint32_t mNodeCount[3]; // 12B, total number of nodes of type: leaf, lower internal, upper internal
2346  uint32_t mTileCount[3]; // 12B, total number of active tile values at the lower internal, upper internal and root node levels
2347  uint64_t mVoxelCount; // 8B, total number of active voxels in the root and all its child nodes.
2348  // No padding since it's always 32B aligned
2349  TreeData& operator=(const TreeData&) = default;
2350  __hostdev__ void setRoot(const void* root) {
2351  NANOVDB_ASSERT(root);
2352  mNodeOffset[3] = util::PtrDiff(root, this);
2353  }
2354 
2355  /// @brief Get a non-const void pointer to the root node (never NULL)
2356  __hostdev__ void* getRoot() { return util::PtrAdd(this, mNodeOffset[3]); }
2357 
2358  /// @brief Get a const void pointer to the root node (never NULL)
2359  __hostdev__ const void* getRoot() const { return util::PtrAdd(this, mNodeOffset[3]); }
2360 
2361  template<typename NodeT>
2362  __hostdev__ void setFirstNode(const NodeT* node) {mNodeOffset[NodeT::LEVEL] = (node ? util::PtrDiff(node, this) : 0);}
2363 
2364  /// @brief Return true if the root is empty, i.e. has no child nodes or constant tiles
2365  __hostdev__ bool isEmpty() const {return mNodeOffset[3] ? *util::PtrAdd<uint32_t>(this, mNodeOffset[3] + sizeof(CoordBBox)) == 0 : true;}
2366 
2367  /// @brief Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2368  __hostdev__ CoordBBox bbox() const {return mNodeOffset[3] ? *util::PtrAdd<CoordBBox>(this, mNodeOffset[3]) : CoordBBox();}
2369 
2370  /// @brief return true if RootData is laid out immediately after TreeData in memory
2371  __hostdev__ bool isRootNext() const {return mNodeOffset[3] ? mNodeOffset[3] == sizeof(TreeData) : false; }
2372 };// TreeData
2373 
2374 // ----------------------------> GridTree <--------------------------------------
2375 
2376 /// @brief defines a tree type from a grid type while preserving constness
2377 template<typename GridT>
2378 struct GridTree
2379 {
2380  using Type = typename GridT::TreeType;
2381  using type = typename GridT::TreeType;
2382 };
2383 template<typename GridT>
2384 struct GridTree<const GridT>
2385 {
2386  using Type = const typename GridT::TreeType;
2387  using type = const typename GridT::TreeType;
2388 };
2389 
2390 // ----------------------------> Tree <--------------------------------------
2391 
2392 /// @brief VDB Tree, which is a thin wrapper around a RootNode.
2393 template<typename RootT>
2394 class Tree : public TreeData
2395 {
2396  static_assert(RootT::LEVEL == 3, "Tree depth is not supported");
2397  static_assert(RootT::ChildNodeType::LOG2DIM == 5, "Tree configuration is not supported");
2398  static_assert(RootT::ChildNodeType::ChildNodeType::LOG2DIM == 4, "Tree configuration is not supported");
2399  static_assert(RootT::LeafNodeType::LOG2DIM == 3, "Tree configuration is not supported");
2400 
2401 public:
2402  using DataType = TreeData;
2403  using RootType = RootT;
2404  using RootNodeType = RootT;
2405  using UpperNodeType = typename RootNodeType::ChildNodeType;
2406  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2407  using LeafNodeType = typename RootType::LeafNodeType;
2408  using ValueType = typename RootT::ValueType;
2409  using BuildType = typename RootT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2410  using CoordType = typename RootT::CoordType;
2411  using AccessorType = DefaultReadAccessor<BuildType>;
2412 
2413  using Node3 = RootT;
2414  using Node2 = typename RootT::ChildNodeType;
2415  using Node1 = typename Node2::ChildNodeType;
2416  using Node0 = LeafNodeType;
2417 
2418  /// @brief This class cannot be constructed or deleted
2419  Tree() = delete;
2420  Tree(const Tree&) = delete;
2421  Tree& operator=(const Tree&) = delete;
2422  ~Tree() = delete;
2423 
2424  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2425 
2426  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2427 
2428  /// @brief return memory usage in bytes for the class
2429  __hostdev__ static uint64_t memUsage() { return sizeof(DataType); }
2430 
2431  __hostdev__ RootT& root() {return *reinterpret_cast<RootT*>(DataType::getRoot());}
2432 
2433  __hostdev__ const RootT& root() const {return *reinterpret_cast<const RootT*>(DataType::getRoot());}
2434 
2435  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->root()); }
2436 
2437  /// @brief Return the value of the given voxel (regardless of state or location in the tree.)
2438  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->root().getValue(ijk); }
2439  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->root().getValue(CoordType(i, j, k)); }
2440 
2441  /// @brief Return the active state of the given voxel (regardless of state or location in the tree.)
2442  __hostdev__ bool isActive(const CoordType& ijk) const { return this->root().isActive(ijk); }
2443 
2444  /// @brief Return true if this tree is empty, i.e. contains no values or nodes
2445  //__hostdev__ bool isEmpty() const { return this->root().isEmpty(); }
2446 
2447  /// @brief Combines the previous two methods in a single call
2448  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->root().probeValue(ijk, v); }
2449 
2450  /// @brief Return a const reference to the background value.
2451  __hostdev__ const ValueType& background() const { return this->root().background(); }
2452 
2453  /// @brief Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree
2454  __hostdev__ void extrema(ValueType& min, ValueType& max) const;
2455 
2456  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2457  //__hostdev__ const BBox<CoordType>& bbox() const { return this->root().bbox(); }
2458 
2459  /// @brief Return the total number of active voxels in this tree.
2460  __hostdev__ uint64_t activeVoxelCount() const { return DataType::mVoxelCount; }
2461 
2462  /// @brief Return the total number of active tiles at the specified level of the tree.
2463  ///
2464  /// @details level = 1,2,3 corresponds to active tile count in lower internal nodes, upper
2465  /// internal nodes, and the root level. Note active values at the leaf level are
2466  /// referred to as active voxels (see activeVoxelCount defined above).
2467  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const
2468  {
2469  NANOVDB_ASSERT(level > 0 && level <= 3); // 1, 2, or 3
2470  return DataType::mTileCount[level - 1];
2471  }
2472 
2473  template<typename NodeT>
2474  __hostdev__ uint32_t nodeCount() const
2475  {
2476  static_assert(NodeT::LEVEL < 3, "Invalid NodeT");
2477  return DataType::mNodeCount[NodeT::LEVEL];
2478  }
2479 
2480  __hostdev__ uint32_t nodeCount(int level) const
2481  {
2482  NANOVDB_ASSERT(level < 3);
2483  return DataType::mNodeCount[level];
2484  }
2485 
2486  __hostdev__ uint32_t totalNodeCount() const
2487  {
2488  return DataType::mNodeCount[0] + DataType::mNodeCount[1] + DataType::mNodeCount[2];
2489  }
2490 
2491  /// @brief return a pointer to the first node of the specified type
2492  ///
2493  /// @warning Note it may return NULL if no nodes exist
2494  template<typename NodeT>
2495  __hostdev__ NodeT* getFirstNode()
2496  {
2497  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2498  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2499  }
2500 
2501  /// @brief return a const pointer to the first node of the specified type
2502  ///
2503  /// @warning Note it may return NULL if no nodes exist
2504  template<typename NodeT>
2505  __hostdev__ const NodeT* getFirstNode() const
2506  {
2507  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2508  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2509  }
2510 
2511  /// @brief return a pointer to the first node at the specified level
2512  ///
2513  /// @warning Note it may return NULL if no nodes exist
2514  template<int LEVEL>
2515  __hostdev__ typename NodeTrait<RootT, LEVEL>::type* getFirstNode()
2516  {
2517  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2518  }
2519 
2520  /// @brief return a const pointer to the first node of the specified level
2521  ///
2522  /// @warning Note it may return NULL if no nodes exist
2523  template<int LEVEL>
2524  __hostdev__ const typename NodeTrait<RootT, LEVEL>::type* getFirstNode() const
2525  {
2526  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2527  }
2528 
2529  /// @brief Template specializations of getFirstNode
2530  __hostdev__ LeafNodeType* getFirstLeaf() { return this->getFirstNode<LeafNodeType>(); }
2531  __hostdev__ const LeafNodeType* getFirstLeaf() const { return this->getFirstNode<LeafNodeType>(); }
2532  __hostdev__ typename NodeTrait<RootT, 1>::type* getFirstLower() { return this->getFirstNode<1>(); }
2533  __hostdev__ const typename NodeTrait<RootT, 1>::type* getFirstLower() const { return this->getFirstNode<1>(); }
2534  __hostdev__ typename NodeTrait<RootT, 2>::type* getFirstUpper() { return this->getFirstNode<2>(); }
2535  __hostdev__ const typename NodeTrait<RootT, 2>::type* getFirstUpper() const { return this->getFirstNode<2>(); }
2536 
2537  template<typename OpT, typename... ArgsT>
2538  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
2539  {
2540  return this->root().template get<OpT>(ijk, args...);
2541  }
2542 
2543  template<typename OpT, typename... ArgsT>
2544  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
2545  {
2546  return this->root().template set<OpT>(ijk, args...);
2547  }
2548 
2549 private:
2550  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(TreeData) is misaligned");
2551 
2552 }; // Tree class
2553 
2554 template<typename RootT>
2555 __hostdev__ void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
2556 {
2557  min = this->root().minimum();
2558  max = this->root().maximum();
2559 }
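As a final illustrative sketch (not part of NanoVDB.h; the function name is hypothetical), the Tree statistics API above can be queried on the host like this:

template<typename RootT>
inline void exampleTreeStats(const Tree<RootT>& tree) // illustrative sketch only
{
    typename RootT::ValueType vMin, vMax;
    tree.extrema(vMin, vMax);                        // min/max of all active values in the tree
    const uint64_t voxels = tree.activeVoxelCount(); // active voxels stored in leaf nodes
    const uint32_t leafs  = tree.nodeCount(0);       // number of leaf nodes (level 0)
    printf("active voxels: %llu in %u leaf nodes\n", (unsigned long long)voxels, leafs);
    (void)vMin; (void)vMax;
}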
2560 
2561 // --------------------------> RootData <------------------------------------
2562 
2563 /// @brief Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode)
2564 ///
2565 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
2566 template<typename ChildT>
2567 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) RootData
2568 {
2569  using ValueT = typename ChildT::ValueType;
2570  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2571  using CoordT = typename ChildT::CoordType;
2572  using StatsT = typename ChildT::FloatType;
2573  static constexpr bool FIXED_SIZE = false;
2574 
2575  /// @brief Return a key based on the coordinates of a voxel
2576 #ifdef NANOVDB_USE_SINGLE_ROOT_KEY
2577  using KeyT = uint64_t;
2578  template<typename CoordType>
2579  __hostdev__ static KeyT CoordToKey(const CoordType& ijk)
2580  {
2581  static_assert(sizeof(CoordT) == sizeof(CoordType), "Mismatching sizeof");
2582  static_assert(32 - ChildT::TOTAL <= 21, "Cannot use 64 bit root keys");
2583  return (KeyT(uint32_t(ijk[2]) >> ChildT::TOTAL)) | // z is the lower 21 bits
2584  (KeyT(uint32_t(ijk[1]) >> ChildT::TOTAL) << 21) | // y is the middle 21 bits
2585  (KeyT(uint32_t(ijk[0]) >> ChildT::TOTAL) << 42); // x is the upper 21 bits
2586  }
2587  __hostdev__ static CoordT KeyToCoord(const KeyT& key)
2588  {
2589  static constexpr uint64_t MASK = (1u << 21) - 1; // used to mask out 21 lower bits
2590  return CoordT(((key >> 42) & MASK) << ChildT::TOTAL, // x are the upper 21 bits
2591  ((key >> 21) & MASK) << ChildT::TOTAL, // y are the middle 21 bits
2592  (key & MASK) << ChildT::TOTAL); // z are the lower 21 bits
2593  }
2594 #else
2595  using KeyT = CoordT;
2596  __hostdev__ static KeyT CoordToKey(const CoordT& ijk) { return ijk & ~ChildT::MASK; }
2597  __hostdev__ static CoordT KeyToCoord(const KeyT& key) { return key; }
2598 #endif
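// Worked example of the single 64-bit root key (an illustrative sketch): with ChildT::TOTAL = 12,
// i.e. root children spanning 4096 voxels per axis, the coordinate (4096, 8192, 12288) maps to
// the child offsets (1, 2, 3), so
//
//     CoordToKey(Coord(4096, 8192, 12288)) == (KeyT(1) << 42) | (KeyT(2) << 21) | KeyT(3)
//     KeyToCoord(CoordToKey(Coord(4096, 8192, 12288))) == Coord(4096, 8192, 12288)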
2599  math::BBox<CoordT> mBBox; // 24B. AABB of active values in index space.
2600  uint32_t mTableSize; // 4B. number of tiles and child pointers in the root node
2601 
2602  ValueT mBackground; // background value, i.e. value of any unset voxel
2603  ValueT mMinimum; // typically 4B, minimum of all the active values
2604  ValueT mMaximum; // typically 4B, maximum of all the active values
2605  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
2606  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
2607 
2608  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
2609  ///
2610  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
2611  __hostdev__ static constexpr uint32_t padding()
2612  {
2613  return sizeof(RootData) - (24 + 4 + 3 * sizeof(ValueT) + 2 * sizeof(StatsT));
2614  }
2615 
2616  struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) Tile
2617  {
2618  template<typename CoordType>
2619  __hostdev__ void setChild(const CoordType& k, const void* ptr, const RootData* data)
2620  {
2621  key = CoordToKey(k);
2622  state = false;
2623  child = util::PtrDiff(ptr, data);
2624  }
2625  template<typename CoordType, typename ValueType>
2626  __hostdev__ void setValue(const CoordType& k, bool s, const ValueType& v)
2627  {
2628  key = CoordToKey(k);
2629  state = s;
2630  value = v;
2631  child = 0;
2632  }
2633  __hostdev__ bool isChild() const { return child != 0; }
2634  __hostdev__ bool isValue() const { return child == 0; }
2635  __hostdev__ bool isActive() const { return child == 0 && state; }
2636  __hostdev__ CoordT origin() const { return KeyToCoord(key); }
2637  KeyT key; // NANOVDB_USE_SINGLE_ROOT_KEY ? 8B : 12B
2638  int64_t child; // 8B. signed byte offset from this node to the child node. 0 means it is a constant tile, so use value.
2639  uint32_t state; // 4B. state of tile value
2640  ValueT value; // value of tile (i.e. no child node)
2641  }; // Tile
2642 
2643  /// @brief Returns a pointer to the tile at the specified linear offset.
2644  ///
2645  /// @warning The linear offset is assumed to be in the valid range
2646  __hostdev__ const Tile* tile(uint32_t n) const
2647  {
2648  NANOVDB_ASSERT(n < mTableSize);
2649  return reinterpret_cast<const Tile*>(this + 1) + n;
2650  }
2651  __hostdev__ Tile* tile(uint32_t n)
2652  {
2653  NANOVDB_ASSERT(n < mTableSize);
2654  return reinterpret_cast<Tile*>(this + 1) + n;
2655  }
2656 
2657  template<typename DataT>
2658  class TileIter
2659  {
2660  protected:
2661  using TileT = typename util::match_const<Tile, DataT>::type;
2662  using NodeT = typename util::match_const<ChildT, DataT>::type;
2663  TileT *mBegin, *mPos, *mEnd;
2664 
2665  public:
2666  __hostdev__ TileIter() : mBegin(nullptr), mPos(nullptr), mEnd(nullptr) {}
2667  __hostdev__ TileIter(DataT* data, uint32_t pos = 0)
2668  : mBegin(reinterpret_cast<TileT*>(data + 1))// tiles reside right after the RootData
2669  , mPos(mBegin + pos)
2670  , mEnd(mBegin + data->mTableSize)
2671  {
2672  NANOVDB_ASSERT(data);
2673  NANOVDB_ASSERT(mBegin <= mPos);// pos > mTableSize is allowed
2674  NANOVDB_ASSERT(mBegin <= mEnd);// mTableSize = 0 is possible
2675  }
2676  __hostdev__ inline operator bool() const { return mPos < mEnd; }
2677  __hostdev__ inline auto pos() const {return mPos - mBegin; }
2678  __hostdev__ inline TileIter& operator++()
2679  {
2680  ++mPos;
2681  return *this;
2682  }
2683  __hostdev__ inline TileT& operator*() const
2684  {
2685  NANOVDB_ASSERT(mPos < mEnd);
2686  return *mPos;
2687  }
2688  __hostdev__ inline TileT* operator->() const
2689  {
2690  NANOVDB_ASSERT(mPos < mEnd);
2691  return mPos;
2692  }
2693  __hostdev__ inline DataT* data() const
2694  {
2695  NANOVDB_ASSERT(mBegin);
2696  return reinterpret_cast<DataT*>(mBegin) - 1;
2697  }
2698  __hostdev__ inline bool isChild() const
2699  {
2700  NANOVDB_ASSERT(mPos < mEnd);
2701  return mPos->child != 0;
2702  }
2703  __hostdev__ inline bool isValue() const
2704  {
2705  NANOVDB_ASSERT(mPos < mEnd);
2706  return mPos->child == 0;
2707  }
2708  __hostdev__ inline bool isValueOn() const
2709  {
2710  NANOVDB_ASSERT(mPos < mEnd);
2711  return mPos->child == 0 && mPos->state != 0;
2712  }
2713  __hostdev__ inline NodeT* child() const
2714  {
2715  NANOVDB_ASSERT(mPos < mEnd && mPos->child != 0);
2716  return util::PtrAdd<NodeT>(this->data(), mPos->child);// byte offset relative to RootData::this
2717  }
2718  __hostdev__ inline ValueT value() const
2719  {
2720  NANOVDB_ASSERT(mPos < mEnd && mPos->child == 0);
2721  return mPos->value;
2722  }
2723  };// TileIter
2724 
2727 
2730 
2732  {
2733  const auto key = CoordToKey(ijk);
2734  TileIterator iter(this);
2735  for(; iter; ++iter) if (iter->key == key) break;
2736  return iter;
2737  }
2738 
2739  __hostdev__ inline ConstTileIterator probe(const CoordT& ijk) const
2740  {
2741  const auto key = CoordToKey(ijk);
2742  ConstTileIterator iter(this);
2743  for(; iter; ++iter) if (iter->key == key) break;
2744  return iter;
2745  }
2746 
2747  __hostdev__ inline Tile* probeTile(const CoordT& ijk)
2748  {
2749  auto iter = this->probe(ijk);
2750  return iter ? iter.operator->() : nullptr;
2751  }
2752 
2753  __hostdev__ inline const Tile* probeTile(const CoordT& ijk) const
2754  {
2755  return const_cast<RootData*>(this)->probeTile(ijk);
2756  }
2757 
2758  __hostdev__ inline ChildT* probeChild(const CoordT& ijk)
2759  {
2760  auto iter = this->probe(ijk);
2761  return iter && iter.isChild() ? iter.child() : nullptr;
2762  }
2763 
2764  __hostdev__ inline const ChildT* probeChild(const CoordT& ijk) const
2765  {
2766  return const_cast<RootData*>(this)->probeChild(ijk);
2767  }
2768 
2769  /// @brief Returns a const reference to the child node in the specified tile.
2770  ///
2771  /// @warning A child node is assumed to exist in the specified tile
2772  __hostdev__ ChildT* getChild(const Tile* tile)
2773  {
2774  NANOVDB_ASSERT(tile->child);
2775  return util::PtrAdd<ChildT>(this, tile->child);
2776  }
2777  __hostdev__ const ChildT* getChild(const Tile* tile) const
2778  {
2779  NANOVDB_ASSERT(tile->child);
2780  return util::PtrAdd<ChildT>(this, tile->child);
2781  }
2782 
2783  __hostdev__ const ValueT& getMin() const { return mMinimum; }
2784  __hostdev__ const ValueT& getMax() const { return mMaximum; }
2785  __hostdev__ const StatsT& average() const { return mAverage; }
2786  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
2787 
2788  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
2789  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
2790  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
2791  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
2792 
2793  /// @brief This class cannot be constructed or deleted
2794  RootData() = delete;
2795  RootData(const RootData&) = delete;
2796  RootData& operator=(const RootData&) = delete;
2797  ~RootData() = delete;
2798 }; // RootData
2799 
2800 // --------------------------> RootNode <------------------------------------
2801 
2802 /// @brief Top-most node of the VDB tree structure.
2803 template<typename ChildT>
2804 class RootNode : public RootData<ChildT>
2805 {
2806 public:
2807  using DataType = RootData<ChildT>;
2808  using ChildNodeType = ChildT;
2809  using RootType = RootNode<ChildT>; // this allows RootNode to behave like a Tree
2810  using RootNodeType = RootType;
2811  using UpperNodeType = ChildT;
2812  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2813  using LeafNodeType = typename ChildT::LeafNodeType;
2814  using ValueType = typename DataType::ValueT;
2815  using FloatType = typename DataType::StatsT;
2816  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2817 
2818  using CoordType = typename ChildT::CoordType;
2819  using BBoxType = math::BBox<CoordType>;
2820  using AccessorType = DefaultReadAccessor<BuildType>;
2821  using Tile = typename DataType::Tile;
2822  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
2823 
2824  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
2825 
2826  template<typename RootT>
2827  class BaseIter
2828  {
2829  protected:
2830  using DataT = typename util::match_const<DataType, RootT>::type;
2831  using TileT = typename util::match_const<Tile, RootT>::type;
2832  typename DataType::template TileIter<DataT> mTileIter;
2833  __hostdev__ BaseIter() : mTileIter() {}
2834  __hostdev__ BaseIter(DataT* data) : mTileIter(data){}
2835 
2836  public:
2837  __hostdev__ operator bool() const { return bool(mTileIter); }
2838  __hostdev__ uint32_t pos() const { return uint32_t(mTileIter.pos()); }
2839  __hostdev__ TileT* tile() const { return mTileIter.operator->(); }
2840  __hostdev__ CoordType getOrigin() const {return mTileIter->origin();}
2841  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
2842  }; // Member class BaseIter
2843 
2844  template<typename RootT>
2845  class ChildIter : public BaseIter<RootT>
2846  {
2847  static_assert(util::is_same<typename util::remove_const<RootT>::type, RootNode>::value, "Invalid RootT");
2848  using BaseT = BaseIter<RootT>;
2849  using NodeT = typename util::match_const<ChildT, RootT>::type;
2850  using BaseT::mTileIter;
2851 
2852  public:
2854  __hostdev__ ChildIter(RootT* parent) : BaseT(parent->data())
2855  {
2856  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2857  }
2858  __hostdev__ NodeT& operator*() const {return *mTileIter.child();}
2859  __hostdev__ NodeT* operator->() const {return mTileIter.child();}
2860  __hostdev__ ChildIter& operator++()
2861  {
2862  ++mTileIter;
2863  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2864  return *this;
2865  }
2866  __hostdev__ ChildIter operator++(int)
2867  {
2868  auto tmp = *this;
2869  this->operator++();
2870  return tmp;
2871  }
2872  }; // Member class ChildIter
2873 
2876 
2879 
2880  template<typename RootT>
2881  class ValueIter : public BaseIter<RootT>
2882  {
2883  using BaseT = BaseIter<RootT>;
2884  using BaseT::mTileIter;
2885 
2886  public:
2888  __hostdev__ ValueIter(RootT* parent) : BaseT(parent->data())
2889  {
2890  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2891  }
2892  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2893  __hostdev__ bool isActive() const {return mTileIter.isValueOn();}
2894  __hostdev__ ValueIter& operator++()
2895  {
2896  ++mTileIter;
2897  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2898  return *this;
2899  }
2900  __hostdev__ ValueIter operator++(int)
2901  {
2902  auto tmp = *this;
2903  this->operator++();
2904  return tmp;
2905  }
2906  }; // Member class ValueIter
2907 
2910 
2913 
2914  template<typename RootT>
2915  class ValueOnIter : public BaseIter<RootT>
2916  {
2917  using BaseT = BaseIter<RootT>;
2918  using BaseT::mTileIter;
2919 
2920  public:
2922  __hostdev__ ValueOnIter(RootT* parent) : BaseT(parent->data())
2923  {
2924  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2925  }
2926  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2927  __hostdev__ ValueOnIter& operator++()
2928  {
2929  ++mTileIter;
2930  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2931  return *this;
2932  }
2933  __hostdev__ ValueOnIter operator++(int)
2934  {
2935  auto tmp = *this;
2936  this->operator++();
2937  return tmp;
2938  }
2939  }; // Member class ValueOnIter
2940 
2943 
2946 
2947  template<typename RootT>
2948  class DenseIter : public BaseIter<RootT>
2949  {
2950  using BaseT = BaseIter<RootT>;
2951  using NodeT = typename util::match_const<ChildT, RootT>::type;
2952  using BaseT::mTileIter;
2953 
2954  public:
2956  __hostdev__ DenseIter(RootT* parent) : BaseT(parent->data()){}
2957  __hostdev__ NodeT* probeChild(ValueType& value) const
2958  {
2959  if (mTileIter.isChild()) return mTileIter.child();
2960  value = mTileIter.value();
2961  return nullptr;
2962  }
2963  __hostdev__ bool isValueOn() const{return mTileIter.isValueOn();}
2964  __hostdev__ DenseIter& operator++()
2965  {
2966  ++mTileIter;
2967  return *this;
2968  }
2969  __hostdev__ DenseIter operator++(int)
2970  {
2971  auto tmp = *this;
2972  ++mTileIter;
2973  return tmp;
2974  }
2975  }; // Member class DenseIter
2976 
2979 
2983 
2984  /// @brief This class cannot be constructed or deleted
2985  RootNode() = delete;
2986  RootNode(const RootNode&) = delete;
2987  RootNode& operator=(const RootNode&) = delete;
2988  ~RootNode() = delete;
2989 
2991 
2992  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2993 
2994  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2995 
2996  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2997  __hostdev__ const BBoxType& bbox() const { return DataType::mBBox; }
2998 
2999  /// @brief Return the total number of active voxels in the root and all its child nodes.
3000 
3001  /// @brief Return a const reference to the background value, i.e. the value associated with
3002  /// any coordinate location that has not been set explicitly.
3003  __hostdev__ const ValueType& background() const { return DataType::mBackground; }
3004 
3005  /// @brief Return the number of tiles encoded in this root node
3006  __hostdev__ const uint32_t& tileCount() const { return DataType::mTableSize; }
3007  __hostdev__ const uint32_t& getTableSize() const { return DataType::mTableSize; }
3008 
3009  /// @brief Return a const reference to the minimum active value encoded in this root node and any of its child nodes
3010  __hostdev__ const ValueType& minimum() const { return DataType::mMinimum; }
3011 
3012  /// @brief Return a const reference to the maximum active value encoded in this root node and any of its child nodes
3013  __hostdev__ const ValueType& maximum() const { return DataType::mMaximum; }
3014 
3015  /// @brief Return a const reference to the average of all the active values encoded in this root node and any of its child nodes
3016  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3017 
3018  /// @brief Return the variance of all the active values encoded in this root node and any of its child nodes
3019  __hostdev__ FloatType variance() const { return math::Pow2(DataType::mStdDevi); }
3020 
3021  /// @brief Return a const reference to the standard deviation of all the active values encoded in this root node and any of its child nodes
3022  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3023 
3024  /// @brief Return the expected memory footprint in bytes with the specified number of tiles
3025  __hostdev__ static uint64_t memUsage(uint32_t tableSize) { return sizeof(RootNode) + tableSize * sizeof(Tile); }
3026 
3027  /// @brief Return the actual memory footprint of this root node
3028  __hostdev__ uint64_t memUsage() const { return sizeof(RootNode) + DataType::mTableSize * sizeof(Tile); }
3029 
3030  /// @brief Return true if this RootNode is empty, i.e. contains no values or nodes
3031  __hostdev__ bool isEmpty() const { return DataType::mTableSize == uint32_t(0); }
3032 
3033  /// @brief Return the value of the given voxel
3034  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3035  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildType>>(CoordType(i, j, k)); }
3036  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3037  /// @brief Return the active state of the specified voxel and update @a v with its value
3038  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3039  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3040 
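// Usage sketch (illustrative only, assuming a float grid; as the note at the top of this file
// explains, client code normally reaches these methods through the Grid/Tree API or a
// ReadAccessor rather than through a RootNode reference root directly):
//
//     const Coord ijk(1, 2, 3);
//     float v = root.getValue(ijk);        // voxel, tile or background value at ijk
//     float w;
//     bool  on = root.probeValue(ijk, w);  // w receives the value, on receives the active state
//     if (const auto* leaf = root.probeLeaf(ijk)) { /* ijk is covered by a leaf node */ }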
3041  template<typename OpT, typename... ArgsT>
3042  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3043  {
3044  if (const Tile* tile = this->probeTile(ijk)) {
3045  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template get<OpT>(ijk, args...);
3046  return OpT::get(*tile, args...);
3047  }
3048  return OpT::get(*this, args...);
3049  }
3050 
3051  template<typename OpT, typename... ArgsT>
3052  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3053  {
3054  if (Tile* tile = DataType::probeTile(ijk)) {
3055  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template set<OpT>(ijk, args...);
3056  return OpT::set(*tile, args...);
3057  }
3058  return OpT::set(*this, args...);
3059  }
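// Note on the get/set templates above: OpT is a small functor type (e.g. GetValue<BuildType>,
// used by getValue) that provides a static get/set overload per node type and a LEVEL constant.
// The call descends through child nodes until it reaches level OpT::LEVEL, a value tile, or the
// root background, and then applies OpT to whatever it stopped at; getValue, isActive,
// probeValue and probeLeaf are thin wrappers around this mechanism.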
3060 
3061 private:
3062  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData) is misaligned");
3063  static_assert(sizeof(typename DataType::Tile) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData::Tile) is misaligned");
3064 
3065  template<typename, int, int, int>
3066  friend class ReadAccessor;
3067 
3068  template<typename>
3069  friend class Tree;
3070 
3071  template<typename RayT, typename AccT>
3072  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3073  {
3074  if (const Tile* tile = this->probeTile(ijk)) {
3075  if (tile->isChild()) {
3076  const auto* child = this->getChild(tile);
3077  acc.insert(ijk, child);
3078  return child->getDimAndCache(ijk, ray, acc);
3079  }
3080  return 1 << ChildT::TOTAL; //tile value
3081  }
3082  return ChildNodeType::dim(); // background
3083  }
3084 
3085  template<typename OpT, typename AccT, typename... ArgsT>
3086  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3087  {
3088  if (const Tile* tile = this->probeTile(ijk)) {
3089  if constexpr(OpT::LEVEL < LEVEL) {
3090  if (tile->isChild()) {
3091  const ChildT* child = this->getChild(tile);
3092  acc.insert(ijk, child);
3093  return child->template getAndCache<OpT>(ijk, acc, args...);
3094  }
3095  }
3096  return OpT::get(*tile, args...);
3097  }
3098  return OpT::get(*this, args...);
3099  }
3100 
3101  template<typename OpT, typename AccT, typename... ArgsT>
3102  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3103  {
3104  if (Tile* tile = DataType::probeTile(ijk)) {
3105  if constexpr(OpT::LEVEL < LEVEL) {
3106  if (tile->isChild()) {
3107  ChildT* child = this->getChild(tile);
3108  acc.insert(ijk, child);
3109  return child->template setAndCache<OpT>(ijk, acc, args...);
3110  }
3111  }
3112  return OpT::set(*tile, args...);
3113  }
3114  return OpT::set(*this, args...);
3115  }
3116 
3117 }; // RootNode class
3118 
3119 // After the RootNode the memory layout is assumed to be the sorted Tiles
3120 
3121 // --------------------------> InternalNode <------------------------------------
3122 
3123 /// @brief Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode)
3124 ///
3125 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3126 template<typename ChildT, uint32_t LOG2DIM>
3127 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) InternalData
3128 {
3129  using ValueT = typename ChildT::ValueType;
3130  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3131  using StatsT = typename ChildT::FloatType;
3132  using CoordT = typename ChildT::CoordType;
3133  using MaskT = typename ChildT::template MaskType<LOG2DIM>;
3134  static constexpr bool FIXED_SIZE = true;
3135 
3136  union Tile
3137  {
3138  ValueT value;
3139  int64_t child; //signed 64 bit byte offset relative to this InternalData, i.e. child-pointer = Tile::child + this
3140  /// @brief This class cannot be constructed or deleted
3141  Tile() = delete;
3142  Tile(const Tile&) = delete;
3143  Tile& operator=(const Tile&) = delete;
3144  ~Tile() = delete;
3145  };
3146 
3147  math::BBox<CoordT> mBBox; // 24B. node bounding box. |
3148  uint64_t mFlags; // 8B. node flags. | 32B aligned
3149  MaskT mValueMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3150  MaskT mChildMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3151 
3152  ValueT mMinimum; // typically 4B
3153  ValueT mMaximum; // typically 4B
3154  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
3155  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3156  // possible padding, e.g. 28 byte padding when ValueType = bool
3157 
3158  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3159  ///
3160  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3161  __hostdev__ static constexpr uint32_t padding()
3162  {
3163  return sizeof(InternalData) - (24u + 8u + 2 * (sizeof(MaskT) + sizeof(ValueT) + sizeof(StatsT)) + (1u << (3 * LOG2DIM)) * (sizeof(ValueT) > 8u ? sizeof(ValueT) : 8u));
3164  }
3165  alignas(32) Tile mTable[1u << (3 * LOG2DIM)]; // sizeof(ValueT) x (16*16*16 or 32*32*32)
3166 
3167  __hostdev__ static uint64_t memUsage() { return sizeof(InternalData); }
3168 
3169  __hostdev__ void setChild(uint32_t n, const void* ptr)
3170  {
3171  NANOVDB_ASSERT(mChildMask.isOn(n));
3172  mTable[n].child = util::PtrDiff(ptr, this);
3173  }
3174 
3175  template<typename ValueT>
3176  __hostdev__ void setValue(uint32_t n, const ValueT& v)
3177  {
3178  NANOVDB_ASSERT(!mChildMask.isOn(n));
3179  mTable[n].value = v;
3180  }
3181 
3182  /// @brief Returns a pointer to the child node at the specified linear offset.
3183  __hostdev__ ChildT* getChild(uint32_t n)
3184  {
3185  NANOVDB_ASSERT(mChildMask.isOn(n));
3186  return util::PtrAdd<ChildT>(this, mTable[n].child);
3187  }
3188  __hostdev__ const ChildT* getChild(uint32_t n) const
3189  {
3190  NANOVDB_ASSERT(mChildMask.isOn(n));
3191  return util::PtrAdd<ChildT>(this, mTable[n].child);
3192  }
3193 
3194  __hostdev__ ValueT getValue(uint32_t n) const
3195  {
3196  NANOVDB_ASSERT(mChildMask.isOff(n));
3197  return mTable[n].value;
3198  }
3199 
3200  __hostdev__ bool isActive(uint32_t n) const
3201  {
3202  NANOVDB_ASSERT(mChildMask.isOff(n));
3203  return mValueMask.isOn(n);
3204  }
3205 
3206  __hostdev__ bool isChild(uint32_t n) const { return mChildMask.isOn(n); }
3207 
3208  template<typename T>
3209  __hostdev__ void setOrigin(const T& ijk) { mBBox[0] = ijk; }
3210 
3211  __hostdev__ const ValueT& getMin() const { return mMinimum; }
3212  __hostdev__ const ValueT& getMax() const { return mMaximum; }
3213  __hostdev__ const StatsT& average() const { return mAverage; }
3214  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
3215 
3216 // GCC 13 (and possibly prior versions) has a regression that results in invalid
3217 // warnings when -Wstringop-overflow is turned on. For details, refer to
3218 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3219 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106757
3220 #if defined(__GNUC__) && (__GNUC__ < 14) && !defined(__APPLE__) && !defined(__llvm__)
3221 #pragma GCC diagnostic push
3222 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3223 #endif
3224  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
3225  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
3226  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
3227  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
3228 #if defined(__GNUC__) && (__GNUC__ < 14) && !defined(__APPLE__) && !defined(__llvm__)
3229 #pragma GCC diagnostic pop
3230 #endif
3231 
3232  /// @brief This class cannot be constructed or deleted
3233  InternalData() = delete;
3234  InternalData(const InternalData&) = delete;
3235  InternalData& operator=(const InternalData&) = delete;
3236  ~InternalData() = delete;
3237 }; // InternalData
3238 
3239 /// @brief Internal nodes of a VDB tree
3240 template<typename ChildT, uint32_t Log2Dim = ChildT::LOG2DIM + 1>
3241 class InternalNode : public InternalData<ChildT, Log2Dim>
3242 {
3243 public:
3244  using DataType = InternalData<ChildT, Log2Dim>;
3245  using ValueType = typename DataType::ValueT;
3246  using FloatType = typename DataType::StatsT;
3247  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3248  using LeafNodeType = typename ChildT::LeafNodeType;
3249  using ChildNodeType = ChildT;
3250  using CoordType = typename ChildT::CoordType;
3251  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
3252  template<uint32_t LOG2>
3253  using MaskType = typename ChildT::template MaskType<LOG2>;
3254  template<bool On>
3255  using MaskIterT = typename Mask<Log2Dim>::template Iterator<On>;
3256 
3257  static constexpr uint32_t LOG2DIM = Log2Dim;
3258  static constexpr uint32_t TOTAL = LOG2DIM + ChildT::TOTAL; // dimension in index space
3259  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
3260  static constexpr uint32_t SIZE = 1u << (3 * LOG2DIM); // number of tile values (or child pointers)
3261  static constexpr uint32_t MASK = (1u << TOTAL) - 1u;
3262  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
3263  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
3264 
3265  /// @brief Visits child nodes of this node only
3266  template <typename ParentT>
3267  class ChildIter : public MaskIterT<true>
3268  {
3269  static_assert(util::is_same<typename util::remove_const<ParentT>::type, InternalNode>::value, "Invalid ParentT");
3270  using BaseT = MaskIterT<true>;
3271  using NodeT = typename util::match_const<ChildT, ParentT>::type;
3272  ParentT* mParent;
3273 
3274  public:
3275  __hostdev__ ChildIter()
3276  : BaseT()
3277  , mParent(nullptr)
3278  {
3279  }
3280  __hostdev__ ChildIter(ParentT* parent)
3281  : BaseT(parent->mChildMask.beginOn())
3282  , mParent(parent)
3283  {
3284  }
3285  ChildIter& operator=(const ChildIter&) = default;
3286  __hostdev__ NodeT& operator*() const
3287  {
3288  NANOVDB_ASSERT(*this);
3289  return *mParent->getChild(BaseT::pos());
3290  }
3291  __hostdev__ NodeT* operator->() const
3292  {
3293  NANOVDB_ASSERT(*this);
3294  return mParent->getChild(BaseT::pos());
3295  }
3296  __hostdev__ CoordType getOrigin() const
3297  {
3298  NANOVDB_ASSERT(*this);
3299  return (*this)->origin();
3300  }
3301  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3302  }; // Member class ChildIter
3303 
3306 
3309 
3310  /// @brief Visits all tile values in this node, i.e. both inactive and active tiles
3311  class ValueIterator : public MaskIterT<false>
3312  {
3313  using BaseT = MaskIterT<false>;
3314  const InternalNode* mParent;
3315 
3316  public:
3317  __hostdev__ ValueIterator()
3318  : BaseT()
3319  , mParent(nullptr)
3320  {
3321  }
3322  __hostdev__ ValueIterator(const InternalNode* parent)
3323  : BaseT(parent->data()->mChildMask.beginOff())
3324  , mParent(parent)
3325  {
3326  }
3327  ValueIterator& operator=(const ValueIterator&) = default;
3328  __hostdev__ ValueType operator*() const
3329  {
3330  NANOVDB_ASSERT(*this);
3331  return mParent->data()->getValue(BaseT::pos());
3332  }
3333  __hostdev__ CoordType getOrigin() const
3334  {
3335  NANOVDB_ASSERT(*this);
3336  return mParent->offsetToGlobalCoord(BaseT::pos());
3337  }
3338  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3339  __hostdev__ bool isActive() const
3340  {
3341  NANOVDB_ASSERT(*this);
3342  return mParent->data()->isActive(BaseT::mPos);
3343  }
3344  }; // Member class ValueIterator
3345 
3348 
3349  /// @brief Visits active tile values of this node only
3350  class ValueOnIterator : public MaskIterT<true>
3351  {
3352  using BaseT = MaskIterT<true>;
3353  const InternalNode* mParent;
3354 
3355  public:
3356  __hostdev__ ValueOnIterator()
3357  : BaseT()
3358  , mParent(nullptr)
3359  {
3360  }
3361  __hostdev__ ValueOnIterator(const InternalNode* parent)
3362  : BaseT(parent->data()->mValueMask.beginOn())
3363  , mParent(parent)
3364  {
3365  }
3366  ValueOnIterator& operator=(const ValueOnIterator&) = default;
3367  __hostdev__ ValueType operator*() const
3368  {
3369  NANOVDB_ASSERT(*this);
3370  return mParent->data()->getValue(BaseT::pos());
3371  }
3372  __hostdev__ CoordType getOrigin() const
3373  {
3374  NANOVDB_ASSERT(*this);
3375  return mParent->offsetToGlobalCoord(BaseT::pos());
3376  }
3377  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3378  }; // Member class ValueOnIterator
3379 
3382 
3383  /// @brief Visits all tile values and child nodes of this node
3384  class DenseIterator : public Mask<Log2Dim>::DenseIterator
3385  {
3386  using BaseT = typename Mask<Log2Dim>::DenseIterator;
3387  const DataType* mParent;
3388 
3389  public:
3390  __hostdev__ DenseIterator()
3391  : BaseT()
3392  , mParent(nullptr)
3393  {
3394  }
3395  __hostdev__ DenseIterator(const InternalNode* parent)
3396  : BaseT(0)
3397  , mParent(parent->data())
3398  {
3399  }
3400  DenseIterator& operator=(const DenseIterator&) = default;
3401  __hostdev__ const ChildT* probeChild(ValueType& value) const
3402  {
3403  NANOVDB_ASSERT(mParent && bool(*this));
3404  const ChildT* child = nullptr;
3405  if (mParent->mChildMask.isOn(BaseT::pos())) {
3406  child = mParent->getChild(BaseT::pos());
3407  } else {
3408  value = mParent->getValue(BaseT::pos());
3409  }
3410  return child;
3411  }
3412  __hostdev__ bool isValueOn() const
3413  {
3414  NANOVDB_ASSERT(mParent && bool(*this));
3415  return mParent->isActive(BaseT::pos());
3416  }
3417  __hostdev__ CoordType getOrigin() const
3418  {
3419  NANOVDB_ASSERT(mParent && bool(*this));
3420  return mParent->offsetToGlobalCoord(BaseT::pos());
3421  }
3422  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3423  }; // Member class DenseIterator
3424 
3426  __hostdev__ DenseIterator cbeginChildAll() const { return DenseIterator(this); } // matches openvdb
3427 
3428  /// @brief This class cannot be constructed or deleted
3429  InternalNode() = delete;
3430  InternalNode(const InternalNode&) = delete;
3431  InternalNode& operator=(const InternalNode&) = delete;
3432  ~InternalNode() = delete;
3433 
3434  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3435 
3436  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3437 
3438  /// @brief Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32)
3439  __hostdev__ static uint32_t dim() { return 1u << TOTAL; }
3440 
3441  /// @brief Return memory usage in bytes for the class
3442  __hostdev__ static size_t memUsage() { return DataType::memUsage(); }
3443 
3444  /// @brief Return a const reference to the bit mask of active voxels in this internal node
3445  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
3446  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
3447 
3448  /// @brief Return a const reference to the bit mask of child nodes in this internal node
3449  __hostdev__ const MaskType<LOG2DIM>& childMask() const { return DataType::mChildMask; }
3450  __hostdev__ const MaskType<LOG2DIM>& getChildMask() const { return DataType::mChildMask; }
3451 
3452  /// @brief Return the origin in index space of this internal node
3453  __hostdev__ CoordType origin() const { return DataType::mBBox.min() & ~MASK; }
3454 
3455  /// @brief Return a const reference to the minimum active value encoded in this internal node and any of its child nodes
3456  __hostdev__ const ValueType& minimum() const { return this->getMin(); }
3457 
3458  /// @brief Return a const reference to the maximum active value encoded in this internal node and any of its child nodes
3459  __hostdev__ const ValueType& maximum() const { return this->getMax(); }
3460 
3461  /// @brief Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes
3462  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3463 
3464  /// @brief Return the variance of all the active values encoded in this internal node and any of its child nodes
3465  __hostdev__ FloatType variance() const { return DataType::mStdDevi * DataType::mStdDevi; }
3466 
3467  /// @brief Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes
3468  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3469 
3470  /// @brief Return a const reference to the bounding box in index space of active values in this internal node and any of its child nodes
3471  __hostdev__ const math::BBox<CoordType>& bbox() const { return DataType::mBBox; }
3472 
3473  /// @brief If the first entry in this node's table is a tile, return the tile's value.
3474  /// Otherwise, return the result of calling getFirstValue() on the child.
3475  __hostdev__ ValueType getFirstValue() const
3476  {
3477  return DataType::mChildMask.isOn(0) ? this->getChild(0)->getFirstValue() : DataType::getValue(0);
3478  }
3479 
3480  /// @brief If the last entry in this node's table is a tile, return the tile's value.
3481  /// Otherwise, return the result of calling getLastValue() on the child.
3482  __hostdev__ ValueType getLastValue() const
3483  {
3484  return DataType::mChildMask.isOn(SIZE - 1) ? this->getChild(SIZE - 1)->getLastValue() : DataType::getValue(SIZE - 1);
3485  }
3486 
3487  /// @brief Return the value of the given voxel
3488  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3489  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3490  /// @brief Return the active state of the specified voxel and update @a v with its value
3491  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3492  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3493 
3494  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
3495  {
3496  const uint32_t n = CoordToOffset(ijk);
3497  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3498  }
3499  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
3500  {
3501  const uint32_t n = CoordToOffset(ijk);
3502  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3503  }
3504 
3505  /// @brief Return the linear offset corresponding to the given coordinate
3506  __hostdev__ static uint32_t CoordToOffset(const CoordType& ijk)
3507  {
3508  return (((ijk[0] & MASK) >> ChildT::TOTAL) << (2 * LOG2DIM)) | // note, we're using bitwise OR instead of +
3509  (((ijk[1] & MASK) >> ChildT::TOTAL) << (LOG2DIM)) |
3510  ((ijk[2] & MASK) >> ChildT::TOTAL);
3511  }
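// Worked example (an illustrative sketch): for the default upper internal node LOG2DIM = 5,
// ChildT::TOTAL = 7 and MASK = 4095, so CoordToOffset(Coord(256, 128, 0)) evaluates to
//
//     ((256 & 4095) >> 7) << 10  =  2 << 10  =  2048
//     ((128 & 4095) >> 7) <<  5  =  1 <<  5  =    32
//     ((  0 & 4095) >> 7)        =               0
//
// i.e. an offset of 2080 into mTable.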
3512 
3513  /// @return the local coordinate of the n'th tile or child node
3514  __hostdev__ static Coord OffsetToLocalCoord(uint32_t n)
3515  {
3516  NANOVDB_ASSERT(n < SIZE);
3517  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
3518  return Coord(n >> 2 * LOG2DIM, m >> LOG2DIM, m & ((1 << LOG2DIM) - 1));
3519  }
3520 
3521  /// @brief Converts the local coordinates of a tile or child node to global coordinates, in place
3522  __hostdev__ void localToGlobalCoord(Coord& ijk) const
3523  {
3524  ijk <<= ChildT::TOTAL;
3525  ijk += this->origin();
3526  }
3527 
3528  __hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
3529  {
3530  Coord ijk = InternalNode::OffsetToLocalCoord(n);
3531  this->localToGlobalCoord(ijk);
3532  return ijk;
3533  }
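// Continuing the example above: OffsetToLocalCoord(2080) computes m = 2080 & 1023 = 32 and
// returns Coord(2080 >> 10, 32 >> 5, 32 & 31) = Coord(2, 1, 0); localToGlobalCoord then scales
// by the child dimension (<< 7) and adds this node's origin, recovering (256, 128, 0) for a
// node anchored at the origin.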
3534 
3535  /// @brief Return true if this node or any of its child nodes contain active values
3536  __hostdev__ bool isActive() const { return DataType::mFlags & uint32_t(2); }
3537 
3538  template<typename OpT, typename... ArgsT>
3539  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3540  {
3541  const uint32_t n = CoordToOffset(ijk);
3542  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template get<OpT>(ijk, args...);
3543  return OpT::get(*this, n, args...);
3544  }
3545 
3546  template<typename OpT, typename... ArgsT>
3547  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3548  {
3549  const uint32_t n = CoordToOffset(ijk);
3550  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template set<OpT>(ijk, args...);
3551  return OpT::set(*this, n, args...);
3552  }
3553 
3554 private:
3555  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(InternalData) is misaligned");
3556 
3557  template<typename, int, int, int>
3558  friend class ReadAccessor;
3559 
3560  template<typename>
3561  friend class RootNode;
3562  template<typename, uint32_t>
3563  friend class InternalNode;
3564 
3565  template<typename RayT, typename AccT>
3566  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3567  {
3568  if (DataType::mFlags & uint32_t(1u))
3569  return this->dim(); // skip this node if the 1st bit is set
3570  //if (!ray.intersects( this->bbox() )) return 1<<TOTAL;
3571 
3572  const uint32_t n = CoordToOffset(ijk);
3573  if (DataType::mChildMask.isOn(n)) {
3574  const ChildT* child = this->getChild(n);
3575  acc.insert(ijk, child);
3576  return child->getDimAndCache(ijk, ray, acc);
3577  }
3578  return ChildNodeType::dim(); // tile value
3579  }
3580 
3581  template<typename OpT, typename AccT, typename... ArgsT>
3582  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3583  {
3584  const uint32_t n = CoordToOffset(ijk);
3585  if constexpr(OpT::LEVEL < LEVEL) {
3586  if (this->isChild(n)) {
3587  const ChildT* child = this->getChild(n);
3588  acc.insert(ijk, child);
3589  return child->template getAndCache<OpT>(ijk, acc, args...);
3590  }
3591  }
3592  return OpT::get(*this, n, args...);
3593  }
3594 
3595  template<typename OpT, typename AccT, typename... ArgsT>
3596  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3597  {
3598  const uint32_t n = CoordToOffset(ijk);
3599  if constexpr(OpT::LEVEL < LEVEL) {
3600  if (this->isChild(n)) {
3601  ChildT* child = this->getChild(n);
3602  acc.insert(ijk, child);
3603  return child->template setAndCache<OpT>(ijk, acc, args...);
3604  }
3605  }
3606  return OpT::set(*this, n, args...);
3607  }
3608 
3609 }; // InternalNode class
3610 
3611 // --------------------------> LeafData<T> <------------------------------------
3612 
3613 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3614 ///
3615 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3616 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3617 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData
3618 {
3619  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3620  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3621  using ValueType = ValueT;
3622  using BuildType = ValueT;
3623  using FloatType = typename FloatTraits<ValueT>::FloatType;
3624  using ArrayType = ValueT; // type used for the internal mValue array
3625  static constexpr bool FIXED_SIZE = true;
3626 
3627  CoordT mBBoxMin; // 12B.
3628  uint8_t mBBoxDif[3]; // 3B.
3629  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3630  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3631 
3632  ValueType mMinimum; // typically 4B
3633  ValueType mMaximum; // typically 4B
3634  FloatType mAverage; // typically 4B, average of all the active values in this node and its child nodes
3635  FloatType mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3636  alignas(32) ValueType mValues[1u << 3 * LOG2DIM];
3637 
3638  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3639  ///
3640  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3641  __hostdev__ static constexpr uint32_t padding()
3642  {
3643  return sizeof(LeafData) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * (sizeof(ValueT) + sizeof(FloatType)) + (1u << (3 * LOG2DIM)) * sizeof(ValueT));
3644  }
3645  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3646 
3647  __hostdev__ static bool hasStats() { return true; }
3648 
3649  __hostdev__ ValueType getValue(uint32_t i) const { return mValues[i]; }
3650  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& value) { mValues[offset] = value; }
3651  __hostdev__ void setValue(uint32_t offset, const ValueType& value)
3652  {
3653  mValueMask.setOn(offset);
3654  mValues[offset] = value;
3655  }
3656  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3657 
3658  __hostdev__ ValueType getMin() const { return mMinimum; }
3659  __hostdev__ ValueType getMax() const { return mMaximum; }
3660  __hostdev__ FloatType getAvg() const { return mAverage; }
3661  __hostdev__ FloatType getDev() const { return mStdDevi; }
3662 
3663 // GCC 11 (and possibly prior versions) has a regression that results in invalid
3664 // warnings when -Wstringop-overflow is turned on. For details, refer to
3665 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3666 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3667 #pragma GCC diagnostic push
3668 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3669 #endif
3670  __hostdev__ void setMin(const ValueType& v) { mMinimum = v; }
3671  __hostdev__ void setMax(const ValueType& v) { mMaximum = v; }
3672  __hostdev__ void setAvg(const FloatType& v) { mAverage = v; }
3673  __hostdev__ void setDev(const FloatType& v) { mStdDevi = v; }
3674 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3675 #pragma GCC diagnostic pop
3676 #endif
3677 
3678  template<typename T>
3679  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3680 
3681  __hostdev__ void fill(const ValueType& v)
3682  {
3683  for (auto *p = mValues, *q = p + 512; p != q; ++p)
3684  *p = v;
3685  }
3686 
3687  /// @brief This class cannot be constructed or deleted
3688  LeafData() = delete;
3689  LeafData(const LeafData&) = delete;
3690  LeafData& operator=(const LeafData&) = delete;
3691  ~LeafData() = delete;
3692 }; // LeafData<ValueT>
3693 
3694 // --------------------------> LeafFnBase <------------------------------------
3695 
3696 /// @brief Base-class for quantized float leaf nodes
3697 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3698 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafFnBase
3699 {
3700  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3701  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3702  using ValueType = float;
3703  using FloatType = float;
3704 
3705  CoordT mBBoxMin; // 12B.
3706  uint8_t mBBoxDif[3]; // 3B.
3707  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3708  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3709 
3710  float mMinimum; // 4B - minimum of ALL values in this node
3711  float mQuantum; // = (max - min)/15 4B
3712  uint16_t mMin, mMax, mAvg, mDev; // quantized representations of statistics of active values
3713  // no padding since it's always 32B aligned
3714  __hostdev__ static uint64_t memUsage() { return sizeof(LeafFnBase); }
3715 
3716  __hostdev__ static bool hasStats() { return true; }
3717 
3718  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3719  ///
3720  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3721  __hostdev__ static constexpr uint32_t padding()
3722  {
3723  return sizeof(LeafFnBase) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * 4 + 4 * 2);
3724  }
3725  __hostdev__ void init(float min, float max, uint8_t bitWidth)
3726  {
3727  mMinimum = min;
3728  mQuantum = (max - min) / float((1 << bitWidth) - 1);
3729  }
3730 
3731  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3732 
3733  /// @brief return the quantized minimum of the active values in this node
3734  __hostdev__ float getMin() const { return mMin * mQuantum + mMinimum; }
3735 
3736  /// @brief return the quantized maximum of the active values in this node
3737  __hostdev__ float getMax() const { return mMax * mQuantum + mMinimum; }
3738 
3739  /// @brief return the quantized average of the active values in this node
3740  __hostdev__ float getAvg() const { return mAvg * mQuantum + mMinimum; }
3741  /// @brief return the quantized standard deviation of the active values in this node
3742 
3743  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3744  __hostdev__ float getDev() const { return mDev * mQuantum; }
3745 
3746  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3747  __hostdev__ void setMin(float min) { mMin = uint16_t((min - mMinimum) / mQuantum + 0.5f); }
3748 
3749  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3750  __hostdev__ void setMax(float max) { mMax = uint16_t((max - mMinimum) / mQuantum + 0.5f); }
3751 
3752  /// @note min <= avg <= max or 0 <= (avg-min)/(max-min) <= 1
3753  __hostdev__ void setAvg(float avg) { mAvg = uint16_t((avg - mMinimum) / mQuantum + 0.5f); }
3754 
3755  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3756  __hostdev__ void setDev(float dev) { mDev = uint16_t(dev / mQuantum + 0.5f); }
3757 
3758  template<typename T>
3759  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3760 }; // LeafFnBase
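// Quantization example (an illustrative sketch): a leaf whose active values span [0.0f, 1.5f]
// and is encoded with 4-bit codes calls init(0.0f, 1.5f, 4), storing mMinimum = 0.0f and
// mQuantum = 1.5f / 15 = 0.1f. A code of 7 then decodes to 7 * 0.1f + 0.0f = 0.7f, and
// setMax(1.5f) stores the quantized code uint16_t(1.5f / 0.1f + 0.5f) = 15.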
3761 
3762 // --------------------------> LeafData<Fp4> <------------------------------------
3763 
3764 /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3765 ///
3766 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3767 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3768 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp4, CoordT, MaskT, LOG2DIM>
3769  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3770 {
3771  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3772  using BuildType = Fp4;
3773  using ArrayType = uint8_t; // type used for the internal mValue array
3774  static constexpr bool FIXED_SIZE = true;
3775  alignas(32) uint8_t mCode[1u << (3 * LOG2DIM - 1)]; // LeafFnBase is 32B aligned and so is mCode
3776 
3777  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3778  __hostdev__ static constexpr uint32_t padding()
3779  {
3780  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3781  return sizeof(LeafData) - sizeof(BaseT) - (1u << (3 * LOG2DIM - 1));
3782  }
3783 
3784  __hostdev__ static constexpr uint8_t bitWidth() { return 4u; }
3785  __hostdev__ float getValue(uint32_t i) const
3786  {
3787 #if 0
3788  const uint8_t c = mCode[i>>1];
3789  return ( (i&1) ? c >> 4 : c & uint8_t(15) )*BaseT::mQuantum + BaseT::mMinimum;
3790 #else
3791  return ((mCode[i >> 1] >> ((i & 1) << 2)) & uint8_t(15)) * BaseT::mQuantum + BaseT::mMinimum;
3792 #endif
3793  }
3794 
3795  /// @brief This class cannot be constructed or deleted
3796  LeafData() = delete;
3797  LeafData(const LeafData&) = delete;
3798  LeafData& operator=(const LeafData&) = delete;
3799  ~LeafData() = delete;
3800 }; // LeafData<Fp4>
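// Fp4 packing note (illustrative): two 4-bit codes share each byte of mCode, with even voxel
// indices in the low nibble and odd indices in the high nibble. For example, getValue(5) reads
// mCode[2], shifts it right by 4, masks with 15 and then dequantizes via mQuantum and mMinimum.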
3801 
3802 // --------------------------> LeafBase<Fp8> <------------------------------------
3803 
3804 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3805 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp8, CoordT, MaskT, LOG2DIM>
3806  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3807 {
3808  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3809  using BuildType = Fp8;
3810  using ArrayType = uint8_t; // type used for the internal mValue array
3811  static constexpr bool FIXED_SIZE = true;
3812  alignas(32) uint8_t mCode[1u << 3 * LOG2DIM];
3813  __hostdev__ static constexpr int64_t memUsage() { return sizeof(LeafData); }
3814  __hostdev__ static constexpr uint32_t padding()
3815  {
3816  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3817  return sizeof(LeafData) - sizeof(BaseT) - (1u << 3 * LOG2DIM);
3818  }
3819 
3820  __hostdev__ static constexpr uint8_t bitWidth() { return 8u; }
3821  __hostdev__ float getValue(uint32_t i) const
3822  {
3823  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/255 + min
3824  }
3825  /// @brief This class cannot be constructed or deleted
3826  LeafData() = delete;
3827  LeafData(const LeafData&) = delete;
3828  LeafData& operator=(const LeafData&) = delete;
3829  ~LeafData() = delete;
3830 }; // LeafData<Fp8>
3831 
3832 // --------------------------> LeafData<Fp16> <------------------------------------
3833 
3834 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3835 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp16, CoordT, MaskT, LOG2DIM>
3836  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3837 {
3838  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3839  using BuildType = Fp16;
3840  using ArrayType = uint16_t; // type used for the internal mValue array
3841  static constexpr bool FIXED_SIZE = true;
3842  alignas(32) uint16_t mCode[1u << 3 * LOG2DIM];
3843 
3844  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3845  __hostdev__ static constexpr uint32_t padding()
3846  {
3847  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3848  return sizeof(LeafData) - sizeof(BaseT) - 2 * (1u << 3 * LOG2DIM);
3849  }
3850 
3851  __hostdev__ static constexpr uint8_t bitWidth() { return 16u; }
3852  __hostdev__ float getValue(uint32_t i) const
3853  {
3854  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/65535 + min
3855  }
3856 
3857  /// @brief This class cannot be constructed or deleted
3858  LeafData() = delete;
3859  LeafData(const LeafData&) = delete;
3860  LeafData& operator=(const LeafData&) = delete;
3861  ~LeafData() = delete;
3862 }; // LeafData<Fp16>
3863 
3864 // --------------------------> LeafData<FpN> <------------------------------------
3865 
3866 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3867 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<FpN, CoordT, MaskT, LOG2DIM>
3868  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3869 { // this class has no additional data members; however, every instance is immediately followed by
3870  // bitWidth*64 bytes. Since its base class is 32B aligned, so are the bitWidth*64 bytes.
3871  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3872  using BuildType = FpN;
3873  static constexpr bool FIXED_SIZE = false;
3874  __hostdev__ static constexpr uint32_t padding()
3875  {
3876  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3877  return 0;
3878  }
3879 
3880  __hostdev__ uint8_t bitWidth() const { return 1 << (BaseT::mFlags >> 5); } // 4,8,16,32 = 2^(2,3,4,5)
3881  __hostdev__ size_t memUsage() const { return sizeof(*this) + this->bitWidth() * 64; }
3882  __hostdev__ static size_t memUsage(uint32_t bitWidth) { return 96u + bitWidth * 64; }
3883  __hostdev__ float getValue(uint32_t i) const
3884  {
3885 #ifdef NANOVDB_FPN_BRANCHLESS // faster
3886  const int b = BaseT::mFlags >> 5; // b = 0, 1, 2, 3, 4 corresponding to 1, 2, 4, 8, 16 bits
3887 #if 0 // use LUT
3888  uint16_t code = reinterpret_cast<const uint16_t*>(this + 1)[i >> (4 - b)];
3889  const static uint8_t shift[5] = {15, 7, 3, 1, 0};
3890  const static uint16_t mask[5] = {1, 3, 15, 255, 65535};
3891  code >>= (i & shift[b]) << b;
3892  code &= mask[b];
3893 #else // no LUT
3894  uint32_t code = reinterpret_cast<const uint32_t*>(this + 1)[i >> (5 - b)];
3895  code >>= (i & ((32 >> b) - 1)) << b;
3896  code &= (1 << (1 << b)) - 1;
3897 #endif
3898 #else // use branched version (slow)
3899  float code;
3900  auto* values = reinterpret_cast<const uint8_t*>(this + 1);
3901  switch (BaseT::mFlags >> 5) {
3902  case 0u: // 1 bit float
3903  code = float((values[i >> 3] >> (i & 7)) & uint8_t(1));
3904  break;
3905  case 1u: // 2 bits float
3906  code = float((values[i >> 2] >> ((i & 3) << 1)) & uint8_t(3));
3907  break;
3908  case 2u: // 4 bits float
3909  code = float((values[i >> 1] >> ((i & 1) << 2)) & uint8_t(15));
3910  break;
3911  case 3u: // 8 bits float
3912  code = float(values[i]);
3913  break;
3914  default: // 16 bits float
3915  code = float(reinterpret_cast<const uint16_t*>(values)[i]);
3916  }
3917 #endif
3918  return float(code) * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/UNITS + min
3919  }
3920 
3921  /// @brief This class cannot be constructed or deleted
3922  LeafData() = delete;
3923  LeafData(const LeafData&) = delete;
3924  LeafData& operator=(const LeafData&) = delete;
3925  ~LeafData() = delete;
3926 }; // LeafData<FpN>
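// FpN sizing example (an illustrative sketch): the per-leaf bit width lives in the top three
// bits of mFlags, so bitWidth() = 1 << (mFlags >> 5). With mFlags >> 5 == 3 the codes are 8 bits
// wide and the 512 voxel codes occupy bitWidth * 64 = 512 bytes directly after the 96-byte
// LeafFnBase header, matching memUsage(8) = 96 + 8 * 64 = 608 bytes.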
3927 
3928 // --------------------------> LeafData<bool> <------------------------------------
3929 
3930 // Partial template specialization of LeafData with bool
3931 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3932 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<bool, CoordT, MaskT, LOG2DIM>
3933 {
3934  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3935  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3936  using ValueType = bool;
3937  using BuildType = bool;
3938  using FloatType = bool; // dummy value type
3939  using ArrayType = MaskT<LOG2DIM>; // type used for the internal mValue array
3940  static constexpr bool FIXED_SIZE = true;
3941 
3942  CoordT mBBoxMin; // 12B.
3943  uint8_t mBBoxDif[3]; // 3B.
3944  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3945  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3946  MaskT<LOG2DIM> mValues; // LOG2DIM(3): 64B.
3947  uint64_t mPadding[2]; // 16B padding to 32B alignment
3948 
3949  __hostdev__ static constexpr uint32_t padding() { return sizeof(LeafData) - 12u - 3u - 1u - 2 * sizeof(MaskT<LOG2DIM>) - 16u; }
3950  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3951  __hostdev__ static bool hasStats() { return false; }
3952  __hostdev__ bool getValue(uint32_t i) const { return mValues.isOn(i); }
3953  __hostdev__ bool getMin() const { return false; } // dummy
3954  __hostdev__ bool getMax() const { return false; } // dummy
3955  __hostdev__ bool getAvg() const { return false; } // dummy
3956  __hostdev__ bool getDev() const { return false; } // dummy
3957  __hostdev__ void setValue(uint32_t offset, bool v)
3958  {
3959  mValueMask.setOn(offset);
3960  mValues.set(offset, v);
3961  }
3962  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3963  __hostdev__ void setMin(const bool&) {} // no-op
3964  __hostdev__ void setMax(const bool&) {} // no-op
3965  __hostdev__ void setAvg(const bool&) {} // no-op
3966  __hostdev__ void setDev(const bool&) {} // no-op
3967 
3968  template<typename T>
3969  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3970 
3971  /// @brief This class cannot be constructed or deleted
3972  LeafData() = delete;
3973  LeafData(const LeafData&) = delete;
3974  LeafData& operator=(const LeafData&) = delete;
3975  ~LeafData() = delete;
3976 }; // LeafData<bool>
3977 
3978 // --------------------------> LeafData<ValueMask> <------------------------------------
3979 
3980 // Partial template specialization of LeafData with ValueMask
3981 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3982 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueMask, CoordT, MaskT, LOG2DIM>
3983 {
3984  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3985  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3986  using ValueType = bool;
3987  using BuildType = ValueMask;
3988  using FloatType = bool; // dummy value type
3989  using ArrayType = void; // type used for the internal mValue array - void means missing
3990  static constexpr bool FIXED_SIZE = true;
3991 
3992  CoordT mBBoxMin; // 12B.
3993  uint8_t mBBoxDif[3]; // 3B.
3994  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3995  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3996  uint64_t mPadding[2]; // 16B padding to 32B alignment
3997 
3998  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3999  __hostdev__ static bool hasStats() { return false; }
4000  __hostdev__ static constexpr uint32_t padding()
4001  {
4002  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4003  }
4004 
4005  __hostdev__ bool getValue(uint32_t i) const { return mValueMask.isOn(i); }
4006  __hostdev__ bool getMin() const { return false; } // dummy
4007  __hostdev__ bool getMax() const { return false; } // dummy
4008  __hostdev__ bool getAvg() const { return false; } // dummy
4009  __hostdev__ bool getDev() const { return false; } // dummy
4010  __hostdev__ void setValue(uint32_t offset, bool) { mValueMask.setOn(offset); }
4011  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4012  __hostdev__ void setMin(const ValueType&) {} // no-op
4013  __hostdev__ void setMax(const ValueType&) {} // no-op
4014  __hostdev__ void setAvg(const FloatType&) {} // no-op
4015  __hostdev__ void setDev(const FloatType&) {} // no-op
4016 
4017  template<typename T>
4018  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4019 
4020  /// @brief This class cannot be constructed or deleted
4021  LeafData() = delete;
4022  LeafData(const LeafData&) = delete;
4023  LeafData& operator=(const LeafData&) = delete;
4024  ~LeafData() = delete;
4025 }; // LeafData<ValueMask>
4026 
4027 // --------------------------> LeafIndexBase <------------------------------------
4028 
4029 // Common base class shared by the LeafData specializations of the index build types (ValueIndex, ValueOnIndex, etc.)
4030 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4031 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafIndexBase
4032 {
4033  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4034  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4035  using ValueType = uint64_t;
4036  using FloatType = uint64_t;
4037  using ArrayType = void; // type used for the internal mValue array - void means missing
4038  static constexpr bool FIXED_SIZE = true;
4039 
4040  CoordT mBBoxMin; // 12B.
4041  uint8_t mBBoxDif[3]; // 3B.
4042  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4043  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4044  uint64_t mOffset, mPrefixSum; // 2 x 8B: offset to the first value indexed by this leaf node, and packed 9-bit prefix sums of active voxels per 64-bit mask word
4045  __hostdev__ static constexpr uint32_t padding()
4046  {
4047  return sizeof(LeafIndexBase) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4048  }
4049  __hostdev__ static uint64_t memUsage() { return sizeof(LeafIndexBase); }
4050  __hostdev__ bool hasStats() const { return mFlags & (uint8_t(1) << 4); }
4051  // return the offset to the first value indexed by this leaf node
4052  __hostdev__ const uint64_t& firstOffset() const { return mOffset; }
4053  __hostdev__ void setMin(const ValueType&) {} // no-op
4054  __hostdev__ void setMax(const ValueType&) {} // no-op
4055  __hostdev__ void setAvg(const FloatType&) {} // no-op
4056  __hostdev__ void setDev(const FloatType&) {} // no-op
4057  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4058  template<typename T>
4059  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4060 
4061 protected:
4062  /// @brief This class should be used as an abstract class and only constructed or deleted via child classes
4063  LeafIndexBase() = default;
4064  LeafIndexBase(const LeafIndexBase&) = default;
4065  LeafIndexBase& operator=(const LeafIndexBase&) = default;
4066  ~LeafIndexBase() = default;
4067 }; // LeafIndexBase
4068 
4069 // --------------------------> LeafData<ValueIndex> <------------------------------------
4070 
4071 // Partial template specialization of LeafData with ValueIndex
4072 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4073 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4074  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4075 {
4076  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4077  using BuildType = ValueIndex;
4078  // return the total number of values indexed by this leaf node, excluding the optional 4 stats
4079  __hostdev__ static uint32_t valueCount() { return uint32_t(512); } // 8^3 = 2^9
4080  // return the offset to the last value indexed by this leaf node (disregarding optional stats)
4081  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + 511u; } // 2^9 - 1
4082  // if stats are available, they are always placed after the last voxel value in this leaf node
4083  __hostdev__ uint64_t getMin() const { return this->hasStats() ? BaseT::mOffset + 512u : 0u; }
4084  __hostdev__ uint64_t getMax() const { return this->hasStats() ? BaseT::mOffset + 513u : 0u; }
4085  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? BaseT::mOffset + 514u : 0u; }
4086  __hostdev__ uint64_t getDev() const { return this->hasStats() ? BaseT::mOffset + 515u : 0u; }
4087  __hostdev__ uint64_t getValue(uint32_t i) const { return BaseT::mOffset + i; } // dense leaf node with active and inactive voxels
4088 }; // LeafData<ValueIndex>
4089 
4090 // --------------------------> LeafData<ValueOnIndex> <------------------------------------
4091 
4092 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4093 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4094  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4095 {
4096  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4097  using BuildType = ValueOnIndex;
4098  __hostdev__ uint32_t valueCount() const
4099  {
4100  return util::countOn(BaseT::mValueMask.words()[7]) + (BaseT::mPrefixSum >> 54u & 511u); // the topmost 9 bits of mPrefixSum hold the prefix sum up to, but excluding, the last word of mValueMask, so the on-count of that last word is added explicitly
4101  }
4102  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + this->valueCount() - 1u; }
4103  __hostdev__ uint64_t getMin() const { return this->hasStats() ? this->lastOffset() + 1u : 0u; }
4104  __hostdev__ uint64_t getMax() const { return this->hasStats() ? this->lastOffset() + 2u : 0u; }
4105  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? this->lastOffset() + 3u : 0u; }
4106  __hostdev__ uint64_t getDev() const { return this->hasStats() ? this->lastOffset() + 4u : 0u; }
4107  __hostdev__ uint64_t getValue(uint32_t i) const
4108  {
4109  //return mValueMask.isOn(i) ? mOffset + mValueMask.countOn(i) : 0u;// for debugging
4110  uint32_t n = i >> 6;
4111  const uint64_t w = BaseT::mValueMask.words()[n], mask = uint64_t(1) << (i & 63u);
4112  if (!(w & mask)) return uint64_t(0); // if i'th value is inactive return offset to background value
4113  uint64_t sum = BaseT::mOffset + util::countOn(w & (mask - 1u));
4114  if (n--) sum += BaseT::mPrefixSum >> (9u * n) & 511u;
4115  return sum;
4116  }
4117 }; // LeafData<ValueOnIndex>
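To make the packed prefix-sum lookup above concrete, here is a minimal, self-contained sketch (illustrative only, not part of NanoVDB.h) of the indexing scheme used by LeafData<ValueOnIndex>::getValue; popcount64, sparseIndex and their parameters are hypothetical names introduced for this example:

#include <bitset>
#include <cstdint>

// Stand-in for util::countOn: number of set bits in a 64-bit word.
inline uint32_t popcount64(uint64_t w) { return uint32_t(std::bitset<64>(w).count()); }

// Given the 8 x 64-bit active mask of a leaf, the offset of its first indexed value,
// and the packed 9-bit per-word prefix sums, return the linear index of voxel i,
// or 0 (the background slot) if voxel i is inactive.
uint64_t sparseIndex(const uint64_t words[8], uint64_t firstOffset, uint64_t prefixSum, uint32_t i)
{
    const uint32_t n    = i >> 6;                   // which 64-bit word
    const uint64_t mask = uint64_t(1) << (i & 63u); // bit within that word
    if (!(words[n] & mask)) return 0u;              // inactive voxel -> background index
    uint64_t sum = firstOffset + popcount64(words[n] & (mask - 1u)); // active bits before i in word n
    if (n > 0) sum += (prefixSum >> (9u * (n - 1u))) & 511u;         // active bits in words 0..n-1
    return sum;
}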
4118 
4119 // --------------------------> LeafData<ValueIndexMask> <------------------------------------
4120 
4121 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4122 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndexMask, CoordT, MaskT, LOG2DIM>
4123  : public LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4124 {
4125  using BuildType = ValueIndexMask;
4126  MaskT<LOG2DIM> mMask;
4127  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4128  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4129  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4130 }; // LeafData<ValueIndexMask>
4131 
4132 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4133 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndexMask, CoordT, MaskT, LOG2DIM>
4134  : public LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4135 {
4136  using BuildType = ValueOnIndexMask;
4137  MaskT<LOG2DIM> mMask;
4138  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4139  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4140  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4141 }; // LeafData<ValueOnIndexMask>
4142 
4143 // --------------------------> LeafData<Point> <------------------------------------
4144 
4145 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4146 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Point, CoordT, MaskT, LOG2DIM>
4147 {
4148  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4149  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4150  using ValueType = uint64_t;
4151  using BuildType = Point;
4153  using ArrayType = uint16_t; // type used for the internal mValue array
4154  static constexpr bool FIXED_SIZE = true;
4155 
4156  CoordT mBBoxMin; // 12B.
4157  uint8_t mBBoxDif[3]; // 3B.
4158  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4159  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4160 
4161  uint64_t mOffset; // 8B
4162  uint64_t mPointCount; // 8B
4163  alignas(32) uint16_t mValues[1u << 3 * LOG2DIM]; // 1KB
4164  // no padding
4165 
4166  /// @brief Return the padding of this class in bytes, caused by member alignment and the 32B data alignment
4167  ///
4168  /// @note The extra bytes are not necessarily at the end, but can also come from the alignment of individual data members.
4169  __hostdev__ static constexpr uint32_t padding()
4170  {
4171  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u + (1u << 3 * LOG2DIM) * 2u);
4172  }
4173  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4174 
4175  __hostdev__ uint64_t offset() const { return mOffset; }
4176  __hostdev__ uint64_t pointCount() const { return mPointCount; }
4177  __hostdev__ uint64_t first(uint32_t i) const { return i ? uint64_t(mValues[i - 1u]) + mOffset : mOffset; }
4178  __hostdev__ uint64_t last(uint32_t i) const { return uint64_t(mValues[i]) + mOffset; }
4179  __hostdev__ uint64_t getValue(uint32_t i) const { return uint64_t(mValues[i]); }
4180  __hostdev__ void setValueOnly(uint32_t offset, uint16_t value) { mValues[offset] = value; }
4181  __hostdev__ void setValue(uint32_t offset, uint16_t value)
4182  {
4183  mValueMask.setOn(offset);
4184  mValues[offset] = value;
4185  }
4186  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4187 
4188  __hostdev__ ValueType getMin() const { return mOffset; }
4189  __hostdev__ ValueType getMax() const { return mPointCount; }
4190  __hostdev__ FloatType getAvg() const { return 0.0f; }
4191  __hostdev__ FloatType getDev() const { return 0.0f; }
4192 
4193  __hostdev__ void setMin(const ValueType&) {}
4194  __hostdev__ void setMax(const ValueType&) {}
4195  __hostdev__ void setAvg(const FloatType&) {}
4196  __hostdev__ void setDev(const FloatType&) {}
4197 
4198  template<typename T>
4199  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4200 
4201  /// @brief This class cannot be constructed or deleted
4202  LeafData() = delete;
4203  LeafData(const LeafData&) = delete;
4204  LeafData& operator=(const LeafData&) = delete;
4205  ~LeafData() = delete;
4206 }; // LeafData<Point>
4207 
4208 // --------------------------> LeafNode<T> <------------------------------------
4209 
4210 /// @brief Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
4211 template<typename BuildT,
4212  typename CoordT = Coord,
4213  template<uint32_t> class MaskT = Mask,
4214  uint32_t Log2Dim = 3>
4215 class LeafNode : public LeafData<BuildT, CoordT, MaskT, Log2Dim>
4216 {
4217 public:
4218  struct ChildNodeType
4219  {
4220  static constexpr uint32_t TOTAL = 0;
4221  static constexpr uint32_t DIM = 1;
4222  __hostdev__ static uint32_t dim() { return 1u; }
4223  }; // Voxel
4224  using LeafNodeType = LeafNode<BuildT, CoordT, MaskT, Log2Dim>;
4225  using DataType = LeafData<BuildT, CoordT, MaskT, Log2Dim>;
4226  using ValueType = typename DataType::ValueType;
4227  using FloatType = typename DataType::FloatType;
4228  using BuildType = typename DataType::BuildType;
4229  using CoordType = CoordT;
4230  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4231  template<uint32_t LOG2>
4232  using MaskType = MaskT<LOG2>;
4233  template<bool ON>
4234  using MaskIterT = typename Mask<Log2Dim>::template Iterator<ON>;
4235 
4236  /// @brief Visits all active values in a leaf node
4237  class ValueOnIterator : public MaskIterT<true>
4238  {
4239  using BaseT = MaskIterT<true>;
4240  const LeafNode* mParent;
4241 
4242  public:
4243  __hostdev__ ValueOnIterator()
4244  : BaseT()
4245  , mParent(nullptr)
4246  {
4247  }
4248  __hostdev__ ValueOnIterator(const LeafNode* parent)
4249  : BaseT(parent->data()->mValueMask.beginOn())
4250  , mParent(parent)
4251  {
4252  }
4253  ValueOnIterator& operator=(const ValueOnIterator&) = default;
4254  __hostdev__ ValueType operator*() const
4255  {
4256  NANOVDB_ASSERT(*this);
4257  return mParent->getValue(BaseT::pos());
4258  }
4259  __hostdev__ CoordT getCoord() const
4260  {
4261  NANOVDB_ASSERT(*this);
4262  return mParent->offsetToGlobalCoord(BaseT::pos());
4263  }
4264  }; // Member class ValueOnIterator
4265 
4266  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
4267  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
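As a usage sketch (illustrative only, not part of this header), the iterator above visits only the active voxels of a leaf; here leaf is assumed to be a valid const nanovdb::NanoLeaf<float>*, e.g. obtained from ReadAccessor::probeLeaf:

// Sum all active voxel values of one leaf node.
float sumActive(const nanovdb::NanoLeaf<float>* leaf)
{
    float sum = 0.0f;
    for (auto it = leaf->beginValueOn(); it; ++it) {
        sum += *it;                               // value of the active voxel
        const nanovdb::Coord ijk = it.getCoord(); // its global index coordinate
        (void)ijk; // unused in this sketch
    }
    return sum;
}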
4268 
4269  /// @brief Visits all inactive values in a leaf node
4270  class ValueOffIterator : public MaskIterT<false>
4271  {
4272  using BaseT = MaskIterT<false>;
4273  const LeafNode* mParent;
4274 
4275  public:
4276  __hostdev__ ValueOffIterator()
4277  : BaseT()
4278  , mParent(nullptr)
4279  {
4280  }
4281  __hostdev__ ValueOffIterator(const LeafNode* parent)
4282  : BaseT(parent->data()->mValueMask.beginOff())
4283  , mParent(parent)
4284  {
4285  }
4286  ValueOffIterator& operator=(const ValueOffIterator&) = default;
4287  __hostdev__ ValueType operator*() const
4288  {
4289  NANOVDB_ASSERT(*this);
4290  return mParent->getValue(BaseT::pos());
4291  }
4292  __hostdev__ CoordT getCoord() const
4293  {
4294  NANOVDB_ASSERT(*this);
4295  return mParent->offsetToGlobalCoord(BaseT::pos());
4296  }
4297  }; // Member class ValueOffIterator
4298 
4299  __hostdev__ ValueOffIterator beginValueOff() const { return ValueOffIterator(this); }
4300  __hostdev__ ValueOffIterator cbeginValueOff() const { return ValueOffIterator(this); }
4301 
4302  /// @brief Visits all values in a leaf node, i.e. both active and inactive values
4303  class ValueIterator
4304  {
4305  const LeafNode* mParent;
4306  uint32_t mPos;
4307 
4308  public:
4309  __hostdev__ ValueIterator()
4310  : mParent(nullptr)
4311  , mPos(1u << 3 * Log2Dim)
4312  {
4313  }
4314  __hostdev__ ValueIterator(const LeafNode* parent)
4315  : mParent(parent)
4316  , mPos(0)
4317  {
4318  NANOVDB_ASSERT(parent);
4319  }
4320  ValueIterator& operator=(const ValueIterator&) = default;
4321  __hostdev__ ValueType operator*() const
4322  {
4323  NANOVDB_ASSERT(*this);
4324  return mParent->getValue(mPos);
4325  }
4326  __hostdev__ CoordT getCoord() const
4327  {
4328  NANOVDB_ASSERT(*this);
4329  return mParent->offsetToGlobalCoord(mPos);
4330  }
4331  __hostdev__ bool isActive() const
4332  {
4333  NANOVDB_ASSERT(*this);
4334  return mParent->isActive(mPos);
4335  }
4336  __hostdev__ operator bool() const { return mPos < (1u << 3 * Log2Dim); }
4337  __hostdev__ ValueIterator& operator++()
4338  {
4339  ++mPos;
4340  return *this;
4341  }
4342  __hostdev__ ValueIterator operator++(int)
4343  {
4344  auto tmp = *this;
4345  ++(*this);
4346  return tmp;
4347  }
4348  }; // Member class ValueIterator
4349 
4350  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
4351  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
4352 
4353  static_assert(util::is_same<ValueType, typename BuildToValueMap<BuildType>::Type>::value, "Mismatching BuildType");
4354  static constexpr uint32_t LOG2DIM = Log2Dim;
4355  static constexpr uint32_t TOTAL = LOG2DIM; // needed by parent nodes
4356  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
4357  static constexpr uint32_t SIZE = 1u << 3 * LOG2DIM; // total number of voxels represented by this node
4358  static constexpr uint32_t MASK = (1u << LOG2DIM) - 1u; // mask for bit operations
4359  static constexpr uint32_t LEVEL = 0; // level 0 = leaf
4360  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
4361 
4362  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4363 
4364  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4365 
4366  /// @brief Return a const reference to the bit mask of active voxels in this leaf node
4367  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
4368  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
4369 
4370  /// @brief Return the minimum active value encoded in this leaf node
4371  __hostdev__ ValueType minimum() const { return DataType::getMin(); }
4372 
4373  /// @brief Return the maximum active value encoded in this leaf node
4374  __hostdev__ ValueType maximum() const { return DataType::getMax(); }
4375 
4376  /// @brief Return the average of all the active values encoded in this leaf node
4377  __hostdev__ FloatType average() const { return DataType::getAvg(); }
4378 
4379  /// @brief Return the variance of all the active values encoded in this leaf node
4380  __hostdev__ FloatType variance() const { return Pow2(DataType::getDev()); }
4381 
4382  /// @brief Return the standard deviation of all the active values encoded in this leaf node
4383  __hostdev__ FloatType stdDeviation() const { return DataType::getDev(); }
4384 
4385  __hostdev__ uint8_t flags() const { return DataType::mFlags; }
4386 
4387  /// @brief Return the origin in index space of this leaf node
4388  __hostdev__ CoordT origin() const { return DataType::mBBoxMin & ~MASK; }
4389 
4390  /// @brief Compute the local coordinates from a linear offset
4391  /// @param n Linear offset into this node's dense table
4392  /// @return Local (vs global) 3D coordinates
4393  __hostdev__ static CoordT OffsetToLocalCoord(uint32_t n)
4394  {
4395  NANOVDB_ASSERT(n < SIZE);
4396  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
4397  return CoordT(n >> 2 * LOG2DIM, m >> LOG2DIM, m & MASK);
4398  }
4399 
4400  /// @brief Converts (in place) a local index coordinate to a global index coordinate
4401  __hostdev__ void localToGlobalCoord(Coord& ijk) const { ijk += this->origin(); }
4402 
4403  __hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
4404  {
4405  return OffsetToLocalCoord(n) + this->origin();
4406  }
4407 
4408  /// @brief Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!)
4409  __hostdev__ static uint32_t dim() { return 1u << LOG2DIM; }
4410 
4411  /// @brief Return the bounding box in index space of active values in this leaf node
4412  __hostdev__ math::BBox<CoordT> bbox() const
4413  {
4414  math::BBox<CoordT> bbox(DataType::mBBoxMin, DataType::mBBoxMin);
4415  if (this->hasBBox()) {
4416  bbox.max()[0] += DataType::mBBoxDif[0];
4417  bbox.max()[1] += DataType::mBBoxDif[1];
4418  bbox.max()[2] += DataType::mBBoxDif[2];
4419  } else { // very rare case
4420  bbox = math::BBox<CoordT>(); // invalid
4421  }
4422  return bbox;
4423  }
4424 
4425  /// @brief Return the total number of voxels (i.e. values) encoded in this leaf node
4426  __hostdev__ static uint32_t voxelCount() { return 1u << (3 * LOG2DIM); }
4427 
4428  __hostdev__ static uint32_t padding() { return DataType::padding(); }
4429 
4430  /// @brief return memory usage in bytes for the leaf node
4431  __hostdev__ uint64_t memUsage() const { return DataType::memUsage(); }
4432 
4433  /// @brief This class cannot be constructed or deleted
4434  LeafNode() = delete;
4435  LeafNode(const LeafNode&) = delete;
4436  LeafNode& operator=(const LeafNode&) = delete;
4437  ~LeafNode() = delete;
4438 
4439  /// @brief Return the voxel value at the given offset.
4440  __hostdev__ ValueType getValue(uint32_t offset) const { return DataType::getValue(offset); }
4441 
4442  /// @brief Return the voxel value at the given coordinate.
4443  __hostdev__ ValueType getValue(const CoordT& ijk) const { return DataType::getValue(CoordToOffset(ijk)); }
4444 
4445  /// @brief Return the first value in this leaf node.
4446  __hostdev__ ValueType getFirstValue() const { return this->getValue(0); }
4447  /// @brief Return the last value in this leaf node.
4448  __hostdev__ ValueType getLastValue() const { return this->getValue(SIZE - 1); }
4449 
4450  /// @brief Sets the value at the specified location and activate its state.
4451  ///
4452  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4453  __hostdev__ void setValue(const CoordT& ijk, const ValueType& v) { DataType::setValue(CoordToOffset(ijk), v); }
4454 
4455  /// @brief Sets the value at the specified location but leaves its state unchanged.
4456  ///
4457  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4458  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& v) { DataType::setValueOnly(offset, v); }
4459  __hostdev__ void setValueOnly(const CoordT& ijk, const ValueType& v) { DataType::setValueOnly(CoordToOffset(ijk), v); }
4460 
4461  /// @brief Return @c true if the voxel value at the given coordinate is active.
4462  __hostdev__ bool isActive(const CoordT& ijk) const { return DataType::mValueMask.isOn(CoordToOffset(ijk)); }
4463  __hostdev__ bool isActive(uint32_t n) const { return DataType::mValueMask.isOn(n); }
4464 
4465  /// @brief Return @c true if any of the voxel values are active in this leaf node.
4466  __hostdev__ bool isActive() const
4467  {
4468  //NANOVDB_ASSERT( bool(DataType::mFlags & uint8_t(2)) != DataType::mValueMask.isOff() );
4469  //return DataType::mFlags & uint8_t(2);
4470  return !DataType::mValueMask.isOff();
4471  }
4472 
4473  __hostdev__ bool hasBBox() const { return DataType::mFlags & uint8_t(2); }
4474 
4475  /// @brief Return @c true if the voxel value at the given coordinate is active and updates @c v with the value.
4476  __hostdev__ bool probeValue(const CoordT& ijk, ValueType& v) const
4477  {
4478  const uint32_t n = CoordToOffset(ijk);
4479  v = DataType::getValue(n);
4480  return DataType::mValueMask.isOn(n);
4481  }
4482 
4483  __hostdev__ const LeafNode* probeLeaf(const CoordT&) const { return this; }
4484 
4485  /// @brief Return the linear offset corresponding to the given coordinate
4486  __hostdev__ static uint32_t CoordToOffset(const CoordT& ijk)
4487  {
4488  return ((ijk[0] & MASK) << (2 * LOG2DIM)) | ((ijk[1] & MASK) << LOG2DIM) | (ijk[2] & MASK);
4489  }
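For intuition (an illustrative sketch, not part of this header): with the default LOG2DIM = 3 the offset packs the three local coordinates into 9 bits, i.e. offset = x*64 + y*8 + z for local (x,y,z), after masking out the node origin:

// Equivalent stand-alone computation for the default 8^3 leaf layout.
constexpr unsigned offsetOf(int i, int j, int k)
{
    return ((i & 7) << 6) | ((j & 7) << 3) | (k & 7);
}
static_assert(offsetOf(1, 2, 3) == 83, "voxel (1,2,3) maps to offset 1*64 + 2*8 + 3");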
4490 
4491  /// @brief Updates the local bounding box of active voxels in this node. Return true if bbox was updated.
4492  ///
4493  /// @warning It assumes that the origin and value mask have already been set.
4494  ///
4495  /// @details This method is based on a few (intrinsic) bit operations and hence is relatively fast.
4496  /// However, it should only be called if either the value mask has changed or if the
4497  /// active bounding box is still undefined, e.g. during construction of this node.
4498  __hostdev__ bool updateBBox();
4499 
4500  template<typename OpT, typename... ArgsT>
4501  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4502  {
4503  return OpT::get(*this, CoordToOffset(ijk), args...);
4504  }
4505 
4506  template<typename OpT, typename... ArgsT>
4507  __hostdev__ auto get(const uint32_t n, ArgsT&&... args) const
4508  {
4509  return OpT::get(*this, n, args...);
4510  }
4511 
4512  template<typename OpT, typename... ArgsT>
4513  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
4514  {
4515  return OpT::set(*this, CoordToOffset(ijk), args...);
4516  }
4517 
4518  template<typename OpT, typename... ArgsT>
4519  __hostdev__ auto set(const uint32_t n, ArgsT&&... args)
4520  {
4521  return OpT::set(*this, n, args...);
4522  }
4523 
4524 private:
4525  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(LeafData) is misaligned");
4526 
4527  template<typename, int, int, int>
4528  friend class ReadAccessor;
4529 
4530  template<typename>
4531  friend class RootNode;
4532  template<typename, uint32_t>
4533  friend class InternalNode;
4534 
4535  template<typename RayT, typename AccT>
4536  __hostdev__ uint32_t getDimAndCache(const CoordT&, const RayT& /*ray*/, const AccT&) const
4537  {
4538  if (DataType::mFlags & uint8_t(1u))
4539  return this->dim(); // skip this node if the 1st bit is set
4540 
4541  //if (!ray.intersects( this->bbox() )) return 1 << LOG2DIM;
4542  return ChildNodeType::dim();
4543  }
4544 
4545  template<typename OpT, typename AccT, typename... ArgsT>
4546  __hostdev__ auto
4547  //__hostdev__ decltype(OpT::get(util::declval<const LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4548  getAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args) const
4549  {
4550  return OpT::get(*this, CoordToOffset(ijk), args...);
4551  }
4552 
4553  template<typename OpT, typename AccT, typename... ArgsT>
4554  //__hostdev__ auto // occasionally fails with NVCC
4555  __hostdev__ decltype(OpT::set(util::declval<LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4556  setAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args)
4557  {
4558  return OpT::set(*this, CoordToOffset(ijk), args...);
4559  }
4560 
4561 }; // LeafNode class
4562 
4563 // --------------------------> LeafNode<T>::updateBBox <------------------------------------
4564 
4565 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4566 __hostdev__ inline bool LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
4567 {
4568  static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOGDIM = 3!");
4569  if (DataType::mValueMask.isOff()) {
4570  DataType::mFlags &= ~uint8_t(2); // set 2nd bit off, which indicates that this node has no bbox
4571  return false;
4572  }
4573  auto update = [&](uint32_t min, uint32_t max, int axis) {
4574  NANOVDB_ASSERT(min <= max && max < 8);
4575  DataType::mBBoxMin[axis] = (DataType::mBBoxMin[axis] & ~MASK) + int(min);
4576  DataType::mBBoxDif[axis] = uint8_t(max - min);
4577  };
4578  uint64_t *w = DataType::mValueMask.words(), word64 = *w;
4579  uint32_t Xmin = word64 ? 0u : 8u, Xmax = Xmin;
4580  for (int i = 1; i < 8; ++i) { // loop over the 7 remaining 64-bit words
4581  if (w[i]) { // skip if word has no set bits
4582  word64 |= w[i]; // union 8 x 64 bits words into one 64 bit word
4583  if (Xmin == 8)
4584  Xmin = i; // only set once
4585  Xmax = i;
4586  }
4587  }
4588  NANOVDB_ASSERT(word64);
4589  update(Xmin, Xmax, 0);
4590  update(util::findLowestOn(word64) >> 3, util::findHighestOn(word64) >> 3, 1);
4591  const uint32_t *p = reinterpret_cast<const uint32_t*>(&word64), word32 = p[0] | p[1];
4592  const uint16_t *q = reinterpret_cast<const uint16_t*>(&word32), word16 = q[0] | q[1];
4593  const uint8_t *b = reinterpret_cast<const uint8_t*>(&word16), byte = b[0] | b[1];
4594  NANOVDB_ASSERT(byte);
4595  update(util::findLowestOn(static_cast<uint32_t>(byte)), util::findHighestOn(static_cast<uint32_t>(byte)), 2);
4596  DataType::mFlags |= uint8_t(2); // set 2nd bit on, which indicates that this node has a bbox
4597  return true;
4598 } // LeafNode::updateBBox
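To make the word-folding above easier to follow, here is a small, self-contained sketch (illustrative only; it uses GCC/Clang bit intrinsics instead of the util:: helpers and assumes at least one bit is set) that recovers the Y and Z extents from the 8 x 64-bit mask words. Word x holds all voxels with local X == x, and bit y*8+z within that word encodes local (y,z), so OR-folding the words isolates the remaining axes:

#include <cstdint>

inline void activeExtentYZ(const uint64_t words[8],
                           uint32_t& ymin, uint32_t& ymax,
                           uint32_t& zmin, uint32_t& zmax)
{
    uint64_t any = 0;
    for (int x = 0; x < 8; ++x) any |= words[x]; // union over the X axis
    // Y extent: set-bit positions divided by 8 (same as findLowestOn(word64) >> 3 above)
    ymin = uint32_t(__builtin_ctzll(any)) >> 3;
    ymax = uint32_t(63 - __builtin_clzll(any)) >> 3;
    // Z extent: fold 64 -> 32 -> 16 -> 8 bits so only the z component remains
    const uint32_t w32 = uint32_t(any) | uint32_t(any >> 32);
    const uint16_t w16 = uint16_t(w32) | uint16_t(w32 >> 16);
    const uint8_t  b   = uint8_t(w16)  | uint8_t(w16 >> 8);
    zmin = uint32_t(__builtin_ctz(b));
    zmax = uint32_t(31 - __builtin_clz(uint32_t(b)));
}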
4599 
4600 // --------------------------> Template specializations and traits <------------------------------------
4601 
4602 /// @brief Template specializations to the default configuration used in OpenVDB:
4603 /// Root -> 32^3 -> 16^3 -> 8^3
4604 template<typename BuildT>
4605 using NanoLeaf = LeafNode<BuildT, Coord, Mask, 3>;
4606 template<typename BuildT>
4607 using NanoLower = InternalNode<NanoLeaf<BuildT>, 4>;
4608 template<typename BuildT>
4609 using NanoUpper = InternalNode<NanoLower<BuildT>, 5>;
4610 template<typename BuildT>
4611 using NanoRoot = RootNode<NanoUpper<BuildT>>;
4612 template<typename BuildT>
4613 using NanoTree = Tree<NanoRoot<BuildT>>;
4614 template<typename BuildT>
4615 using NanoGrid = Grid<NanoTree<BuildT>>;
4616 
4617 /// @brief Trait to map from LEVEL to node type
4618 template<typename BuildT, int LEVEL>
4619 struct NanoNode;
4620 
4621 // Partial template specializations of the NanoNode struct above
4622 template<typename BuildT>
4623 struct NanoNode<BuildT, 0>
4624 {
4625  using Type = NanoLeaf<BuildT>;
4626  using type = NanoLeaf<BuildT>;
4627 };
4628 template<typename BuildT>
4629 struct NanoNode<BuildT, 1>
4630 {
4631  using Type = NanoLower<BuildT>;
4632  using type = NanoLower<BuildT>;
4633 };
4634 template<typename BuildT>
4635 struct NanoNode<BuildT, 2>
4636 {
4637  using Type = NanoUpper<BuildT>;
4638  using type = NanoUpper<BuildT>;
4639 };
4640 template<typename BuildT>
4641 struct NanoNode<BuildT, 3>
4642 {
4643  using Type = NanoRoot<BuildT>;
4644  using type = NanoRoot<BuildT>;
4645 };
4646 
4667 
4689 
4690 // --------------------------> callNanoGrid <------------------------------------
4691 
4692 /**
4693 * @brief Below is an example of the struct used for generic programming with callNanoGrid
4694 * @details For an example see "struct Crc32TailOld" in nanovdb/tools/GridChecksum.h or
4695 * "struct IsNanoGridValid" in nanovdb/tools/GridValidator.h
4696 * @code
4697 * struct OpT {
4698 * // define these two static functions with non-const GridData
4699 * template <typename BuildT>
4700 * static auto known( GridData *gridData, args...);
4701 * static auto unknown( GridData *gridData, args...);
4702 * // or alternatively these two static functions with const GridData
4703 * template <typename BuildT>
4704 * static auto known(const GridData *gridData, args...);
4705 * static auto unknown(const GridData *gridData, args...);
4706 * };
4707 * @endcode
4708 *
4709 * @brief Here is an example of how to use callNanoGrid in client code
4710 * @code
4711 * return callNanoGrid<OpT>(gridData, args...);
4712 * @endcode
4713 */
4714 
4715 /// @brief Use this function, which takes a pointer to GridData, to call
4716 /// other functions that depend on a NanoGrid of a known ValueType.
4717 /// @details This function allows for generic programming by converting GridData
4718 /// to a NanoGrid of the type encoded in GridData::mGridType.
4719 template<typename OpT, typename GridDataT, typename... ArgsT>
4720 auto callNanoGrid(GridDataT *gridData, ArgsT&&... args)
4721 {
4722  static_assert(util::is_same<GridDataT, GridData, const GridData>::value, "Expected gridData to be of type GridData* or const GridData*");
4723  switch (gridData->mGridType){
4724  case GridType::Float:
4725  return OpT::template known<float>(gridData, args...);
4726  case GridType::Double:
4727  return OpT::template known<double>(gridData, args...);
4728  case GridType::Int16:
4729  return OpT::template known<int16_t>(gridData, args...);
4730  case GridType::Int32:
4731  return OpT::template known<int32_t>(gridData, args...);
4732  case GridType::Int64:
4733  return OpT::template known<int64_t>(gridData, args...);
4734  case GridType::Vec3f:
4735  return OpT::template known<Vec3f>(gridData, args...);
4736  case GridType::Vec3d:
4737  return OpT::template known<Vec3d>(gridData, args...);
4738  case GridType::UInt32:
4739  return OpT::template known<uint32_t>(gridData, args...);
4740  case GridType::Mask:
4741  return OpT::template known<ValueMask>(gridData, args...);
4742  case GridType::Index:
4743  return OpT::template known<ValueIndex>(gridData, args...);
4744  case GridType::OnIndex:
4745  return OpT::template known<ValueOnIndex>(gridData, args...);
4746  case GridType::IndexMask:
4747  return OpT::template known<ValueIndexMask>(gridData, args...);
4748  case GridType::OnIndexMask:
4749  return OpT::template known<ValueOnIndexMask>(gridData, args...);
4750  case GridType::Boolean:
4751  return OpT::template known<bool>(gridData, args...);
4752  case GridType::RGBA8:
4753  return OpT::template known<math::Rgba8>(gridData, args...);
4754  case GridType::Fp4:
4755  return OpT::template known<Fp4>(gridData, args...);
4756  case GridType::Fp8:
4757  return OpT::template known<Fp8>(gridData, args...);
4758  case GridType::Fp16:
4759  return OpT::template known<Fp16>(gridData, args...);
4760  case GridType::FpN:
4761  return OpT::template known<FpN>(gridData, args...);
4762  case GridType::Vec4f:
4763  return OpT::template known<Vec4f>(gridData, args...);
4764  case GridType::Vec4d:
4765  return OpT::template known<Vec4d>(gridData, args...);
4766  case GridType::UInt8:
4767  return OpT::template known<uint8_t>(gridData, args...);
4768  default:
4769  return OpT::unknown(gridData, args...);
4770  }
4771 }// callNanoGrid
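As a usage sketch (illustrative only, not part of this header), the following functor follows the OpT pattern documented above; ActiveVoxelCountOp is a hypothetical name and the grid pointer is assumed to be valid:

// Obtain the active voxel count of a grid whose ValueType is only known at runtime.
struct ActiveVoxelCountOp
{
    template<typename BuildT>
    static uint64_t known(const nanovdb::GridData* gridData)
    {
        auto* grid = reinterpret_cast<const nanovdb::NanoGrid<BuildT>*>(gridData);
        return grid->activeVoxelCount();
    }
    static uint64_t unknown(const nanovdb::GridData*) { return 0u; } // unsupported GridType
};

// usage: const uint64_t n = nanovdb::callNanoGrid<ActiveVoxelCountOp>(gridData);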
4772 
4773 // --------------------------> ReadAccessor <------------------------------------
4774 
4775 /// @brief A read-only value accessor with three levels of node caching. This allows for
4776 /// inverse tree traversal during lookup, which is on average significantly faster
4777 /// than calling the equivalent method on the tree (i.e. top-down traversal).
4778 ///
4779 /// @note Since a value accessor accelerates random access operations
4780 /// by re-using cached access patterns, this accessor should be reused for multiple access
4781 /// operations. In other words, never create an instance of this accessor for a single
4782 /// access only. In general avoid single access operations with this accessor, and
4783 /// if that is not possible call the corresponding method on the tree instead.
4784 ///
4785 /// @warning Since this ReadAccessor internally caches raw pointers to the nodes of the tree
4786 /// structure, it is not safe to copy between host and device, or even to share among
4787 /// multiple threads on the same host or device. However, it is light-weight so simply
4788 /// instantiate one per thread (on the host and/or device).
4789 ///
4790 /// @details Used to accelerate random access into a VDB tree. Provides on average
4791 /// O(1) random access operations by means of inverse tree traversal,
4792 /// which amortizes the non-constant time complexity of the root node.
4793 
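A brief usage sketch (illustrative only, not part of this header): construct one accessor per thread from the grid and reuse it for many spatially coherent lookups; grid is assumed to be a valid const nanovdb::NanoGrid<float>&:

// Per-thread read access into a NanoVDB float grid.
float sampleLine(const nanovdb::NanoGrid<float>& grid)
{
    auto acc = grid.getAccessor(); // caches nodes at all three tree levels
    float sum = 0.0f;
    for (int i = 0; i < 100; ++i)
        sum += acc.getValue(nanovdb::Coord(i, 0, 0)); // coherent access benefits from the cache
    return sum;
}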
4794 template<typename BuildT>
4795 class ReadAccessor<BuildT, -1, -1, -1>
4796 {
4797  using GridT = NanoGrid<BuildT>; // grid
4798  using TreeT = NanoTree<BuildT>; // tree
4799  using RootT = NanoRoot<BuildT>; // root node
4800  using LeafT = NanoLeaf<BuildT>; // Leaf node
4801  using FloatType = typename RootT::FloatType;
4802  using CoordValueType = typename RootT::CoordType::ValueType;
4803 
4804  mutable const RootT* mRoot; // 8 bytes (mutable to allow for access methods to be const)
4805 public:
4806  using BuildType = BuildT;
4807  using ValueType = typename RootT::ValueType;
4808  using CoordType = typename RootT::CoordType;
4809 
4810  static const int CacheLevels = 0;
4811 
4812  /// @brief Constructor from a root node
4813  __hostdev__ ReadAccessor(const RootT& root)
4814  : mRoot{&root}
4815  {
4816  }
4817 
4818  /// @brief Constructor from a grid
4819  __hostdev__ ReadAccessor(const GridT& grid)
4820  : ReadAccessor(grid.tree().root())
4821  {
4822  }
4823 
4824  /// @brief Constructor from a tree
4825  __hostdev__ ReadAccessor(const TreeT& tree)
4826  : ReadAccessor(tree.root())
4827  {
4828  }
4829 
4830  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
4831  /// @note No-op since this template specialization has no cache
4832  __hostdev__ void clear() {}
4833 
4834  __hostdev__ const RootT& root() const { return *mRoot; }
4835 
4836  /// @brief Default constructors
4837  ReadAccessor(const ReadAccessor&) = default;
4838  ~ReadAccessor() = default;
4839  ReadAccessor& operator=(const ReadAccessor&) = default;
4840  __hostdev__ ValueType getValue(const CoordType& ijk) const
4841  {
4842  return this->template get<GetValue<BuildT>>(ijk);
4843  }
4844  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4845  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4846  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4847  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4848  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4849  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4850  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4851  template<typename RayT>
4852  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4853  {
4854  return mRoot->getDimAndCache(ijk, ray, *this);
4855  }
4856  template<typename OpT, typename... ArgsT>
4857  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4858  {
4859  return mRoot->template get<OpT>(ijk, args...);
4860  }
4861 
4862  template<typename OpT, typename... ArgsT>
4863  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
4864  {
4865  return const_cast<RootT*>(mRoot)->template set<OpT>(ijk, args...);
4866  }
4867 
4868 private:
4869  /// @brief Allow nodes to insert themselves into the cache.
4870  template<typename>
4871  friend class RootNode;
4872  template<typename, uint32_t>
4873  friend class InternalNode;
4874  template<typename, typename, template<uint32_t> class, uint32_t>
4875  friend class LeafNode;
4876 
4877  /// @brief No-op
4878  template<typename NodeT>
4879  __hostdev__ void insert(const CoordType&, const NodeT*) const {}
4880 }; // ReadAccessor<ValueT, -1, -1, -1> class
4881 
4882 /// @brief Node caching at a single tree level
4883 template<typename BuildT, int LEVEL0>
4884 class ReadAccessor<BuildT, LEVEL0, -1, -1> //e.g. 0, 1, 2
4885 {
4886  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 should be 0, 1, or 2");
4887 
4888  using GridT = NanoGrid<BuildT>; // grid
4889  using TreeT = NanoTree<BuildT>;
4890  using RootT = NanoRoot<BuildT>; // root node
4891  using LeafT = NanoLeaf<BuildT>; // Leaf node
4892  using NodeT = typename NodeTrait<TreeT, LEVEL0>::type;
4893  using CoordT = typename RootT::CoordType;
4894  using ValueT = typename RootT::ValueType;
4895 
4896  using FloatType = typename RootT::FloatType;
4897  using CoordValueType = typename RootT::CoordT::ValueType;
4898 
4899  // All member data are mutable to allow for access methods to be const
4900  mutable CoordT mKey; // 3*4 = 12 bytes
4901  mutable const RootT* mRoot; // 8 bytes
4902  mutable const NodeT* mNode; // 8 bytes
4903 
4904 public:
4905  using BuildType = BuildT;
4906  using ValueType = ValueT;
4907  using CoordType = CoordT;
4908 
4909  static const int CacheLevels = 1;
4910 
4911  /// @brief Constructor from a root node
4912  __hostdev__ ReadAccessor(const RootT& root)
4913  : mKey(CoordType::max())
4914  , mRoot(&root)
4915  , mNode(nullptr)
4916  {
4917  }
4918 
4919  /// @brief Constructor from a grid
4920  __hostdev__ ReadAccessor(const GridT& grid)
4921  : ReadAccessor(grid.tree().root())
4922  {
4923  }
4924 
4925  /// @brief Constructor from a tree
4926  __hostdev__ ReadAccessor(const TreeT& tree)
4927  : ReadAccessor(tree.root())
4928  {
4929  }
4930 
4931  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
4932  __hostdev__ void clear()
4933  {
4934  mKey = CoordType::max();
4935  mNode = nullptr;
4936  }
4937 
4938  __hostdev__ const RootT& root() const { return *mRoot; }
4939 
4940  /// @brief Default constructors
4941  ReadAccessor(const ReadAccessor&) = default;
4942  ~ReadAccessor() = default;
4943  ReadAccessor& operator=(const ReadAccessor&) = default;
4944 
4945  __hostdev__ bool isCached(const CoordType& ijk) const
4946  {
4947  return (ijk[0] & int32_t(~NodeT::MASK)) == mKey[0] &&
4948  (ijk[1] & int32_t(~NodeT::MASK)) == mKey[1] &&
4949  (ijk[2] & int32_t(~NodeT::MASK)) == mKey[2];
4950  }
4951 
4952  __hostdev__ ValueType getValue(const CoordType& ijk) const
4953  {
4954  return this->template get<GetValue<BuildT>>(ijk);
4955  }
4956  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4957  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4958  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4959  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4960  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4961  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4962  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4963 
4964  template<typename RayT>
4965  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4966  {
4967  if (this->isCached(ijk)) return mNode->getDimAndCache(ijk, ray, *this);
4968  return mRoot->getDimAndCache(ijk, ray, *this);
4969  }
4970 
4971  template<typename OpT, typename... ArgsT>
4972  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
4973  {
4974  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return mNode->template getAndCache<OpT>(ijk, *this, args...);
4975  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
4976  }
4977 
4978  template<typename OpT, typename... ArgsT>
4979  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
4980  {
4981  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return const_cast<NodeT*>(mNode)->template setAndCache<OpT>(ijk, *this, args...);
4982  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
4983  }
4984 
4985 private:
4986  /// @brief Allow nodes to insert themselves into the cache.
4987  template<typename>
4988  friend class RootNode;
4989  template<typename, uint32_t>
4990  friend class InternalNode;
4991  template<typename, typename, template<uint32_t> class, uint32_t>
4992  friend class LeafNode;
4993 
4994  /// @brief Inserts a leaf node and key pair into this ReadAccessor
4995  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
4996  {
4997  mKey = ijk & ~NodeT::MASK;
4998  mNode = node;
4999  }
5000 
5001  // no-op
5002  template<typename OtherNodeT>
5003  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5004 
5005 }; // ReadAccessor<ValueT, LEVEL0>
5006 
5007 template<typename BuildT, int LEVEL0, int LEVEL1>
5008 class ReadAccessor<BuildT, LEVEL0, LEVEL1, -1> //e.g. (0,1), (1,2), (0,2)
5009 {
5010  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 must be 0, 1, 2");
5011  static_assert(LEVEL1 >= 0 && LEVEL1 <= 2, "LEVEL1 must be 0, 1, 2");
5012  static_assert(LEVEL0 < LEVEL1, "Level 0 must be lower than level 1");
5013  using GridT = NanoGrid<BuildT>; // grid
5014  using TreeT = NanoTree<BuildT>;
5015  using RootT = NanoRoot<BuildT>;
5016  using LeafT = NanoLeaf<BuildT>;
5017  using Node1T = typename NodeTrait<TreeT, LEVEL0>::type;
5018  using Node2T = typename NodeTrait<TreeT, LEVEL1>::type;
5019  using CoordT = typename RootT::CoordType;
5020  using ValueT = typename RootT::ValueType;
5021  using FloatType = typename RootT::FloatType;
5022  using CoordValueType = typename RootT::CoordT::ValueType;
5023 
5024  // All member data are mutable to allow for access methods to be const
5025 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5026  mutable CoordT mKey; // 3*4 = 12 bytes
5027 #else // 68 bytes total
5028  mutable CoordT mKeys[2]; // 2*3*4 = 24 bytes
5029 #endif
5030  mutable const RootT* mRoot;
5031  mutable const Node1T* mNode1;
5032  mutable const Node2T* mNode2;
5033 
5034 public:
5035  using BuildType = BuildT;
5036  using ValueType = ValueT;
5037  using CoordType = CoordT;
5038 
5039  static const int CacheLevels = 2;
5040 
5041  /// @brief Constructor from a root node
5042  __hostdev__ ReadAccessor(const RootT& root)
5043 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5044  : mKey(CoordType::max())
5045 #else
5046  : mKeys{CoordType::max(), CoordType::max()}
5047 #endif
5048  , mRoot(&root)
5049  , mNode1(nullptr)
5050  , mNode2(nullptr)
5051  {
5052  }
5053 
5054  /// @brief Constructor from a grid
5055  __hostdev__ ReadAccessor(const GridT& grid)
5056  : ReadAccessor(grid.tree().root())
5057  {
5058  }
5059 
5060  /// @brief Constructor from a tree
5061  __hostdev__ ReadAccessor(const TreeT& tree)
5062  : ReadAccessor(tree.root())
5063  {
5064  }
5065 
5066  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5067  __hostdev__ void clear()
5068  {
5069 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5070  mKey = CoordType::max();
5071 #else
5072  mKeys[0] = mKeys[1] = CoordType::max();
5073 #endif
5074  mNode1 = nullptr;
5075  mNode2 = nullptr;
5076  }
5077 
5078  __hostdev__ const RootT& root() const { return *mRoot; }
5079 
5080  /// @brief Default constructors
5081  ReadAccessor(const ReadAccessor&) = default;
5082  ~ReadAccessor() = default;
5083  ReadAccessor& operator=(const ReadAccessor&) = default;
5084 
5085 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5086  __hostdev__ bool isCached1(CoordValueType dirty) const
5087  {
5088  if (!mNode1)
5089  return false;
5090  if (dirty & int32_t(~Node1T::MASK)) {
5091  mNode1 = nullptr;
5092  return false;
5093  }
5094  return true;
5095  }
5096  __hostdev__ bool isCached2(CoordValueType dirty) const
5097  {
5098  if (!mNode2)
5099  return false;
5100  if (dirty & int32_t(~Node2T::MASK)) {
5101  mNode2 = nullptr;
5102  return false;
5103  }
5104  return true;
5105  }
5106  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5107  {
5108  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5109  }
5110 #else
5111  __hostdev__ bool isCached1(const CoordType& ijk) const
5112  {
5113  return (ijk[0] & int32_t(~Node1T::MASK)) == mKeys[0][0] &&
5114  (ijk[1] & int32_t(~Node1T::MASK)) == mKeys[0][1] &&
5115  (ijk[2] & int32_t(~Node1T::MASK)) == mKeys[0][2];
5116  }
5117  __hostdev__ bool isCached2(const CoordType& ijk) const
5118  {
5119  return (ijk[0] & int32_t(~Node2T::MASK)) == mKeys[1][0] &&
5120  (ijk[1] & int32_t(~Node2T::MASK)) == mKeys[1][1] &&
5121  (ijk[2] & int32_t(~Node2T::MASK)) == mKeys[1][2];
5122  }
5123 #endif
5124 
5125  __hostdev__ ValueType getValue(const CoordType& ijk) const
5126  {
5127  return this->template get<GetValue<BuildT>>(ijk);
5128  }
5129  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5130  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5131  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5132  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5133  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5134  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5135  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5136 
5137  template<typename RayT>
5138  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5139  {
5140 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5141  const CoordValueType dirty = this->computeDirty(ijk);
5142 #else
5143  auto&& dirty = ijk;
5144 #endif
5145  if (this->isCached1(dirty)) {
5146  return mNode1->getDimAndCache(ijk, ray, *this);
5147  } else if (this->isCached2(dirty)) {
5148  return mNode2->getDimAndCache(ijk, ray, *this);
5149  }
5150  return mRoot->getDimAndCache(ijk, ray, *this);
5151  }
5152 
5153  template<typename OpT, typename... ArgsT>
5154  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5155  {
5156 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5157  const CoordValueType dirty = this->computeDirty(ijk);
5158 #else
5159  auto&& dirty = ijk;
5160 #endif
5161  if constexpr(OpT::LEVEL <= LEVEL0) {
5162  if (this->isCached1(dirty)) return mNode1->template getAndCache<OpT>(ijk, *this, args...);
5163  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5164  if (this->isCached2(dirty)) return mNode2->template getAndCache<OpT>(ijk, *this, args...);
5165  }
5166  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5167  }
5168 
5169  template<typename OpT, typename... ArgsT>
5170  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5171  {
5172 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5173  const CoordValueType dirty = this->computeDirty(ijk);
5174 #else
5175  auto&& dirty = ijk;
5176 #endif
5177  if constexpr(OpT::LEVEL <= LEVEL0) {
5178  if (this->isCached1(dirty)) return const_cast<Node1T*>(mNode1)->template setAndCache<OpT>(ijk, *this, args...);
5179  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5180  if (this->isCached2(dirty)) return const_cast<Node2T*>(mNode2)->template setAndCache<OpT>(ijk, *this, args...);
5181  }
5182  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5183  }
5184 
5185 private:
5186  /// @brief Allow nodes to insert themselves into the cache.
5187  template<typename>
5188  friend class RootNode;
5189  template<typename, uint32_t>
5190  friend class InternalNode;
5191  template<typename, typename, template<uint32_t> class, uint32_t>
5192  friend class LeafNode;
5193 
5194  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5195  __hostdev__ void insert(const CoordType& ijk, const Node1T* node) const
5196  {
5197 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5198  mKey = ijk;
5199 #else
5200  mKeys[0] = ijk & ~Node1T::MASK;
5201 #endif
5202  mNode1 = node;
5203  }
5204  __hostdev__ void insert(const CoordType& ijk, const Node2T* node) const
5205  {
5206 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5207  mKey = ijk;
5208 #else
5209  mKeys[1] = ijk & ~Node2T::MASK;
5210 #endif
5211  mNode2 = node;
5212  }
5213  template<typename OtherNodeT>
5214  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5215 }; // ReadAccessor<BuildT, LEVEL0, LEVEL1>
5216 
5217 /// @brief Node caching at all (three) tree levels
5218 template<typename BuildT>
5219 class ReadAccessor<BuildT, 0, 1, 2>
5220 {
5221  using GridT = NanoGrid<BuildT>; // grid
5222  using TreeT = NanoTree<BuildT>;
5223  using RootT = NanoRoot<BuildT>; // root node
5224  using NodeT2 = NanoUpper<BuildT>; // upper internal node
5225  using NodeT1 = NanoLower<BuildT>; // lower internal node
5226  using LeafT = NanoLeaf<BuildT>; // Leaf node
5227  using CoordT = typename RootT::CoordType;
5228  using ValueT = typename RootT::ValueType;
5229 
5230  using FloatType = typename RootT::FloatType;
5231  using CoordValueType = typename RootT::CoordT::ValueType;
5232 
5233  // All member data are mutable to allow for access methods to be const
5234 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5235  mutable CoordT mKey; // 3*4 = 12 bytes
5236 #else // 68 bytes total
5237  mutable CoordT mKeys[3]; // 3*3*4 = 36 bytes
5238 #endif
5239  mutable const RootT* mRoot;
5240  mutable const void* mNode[3]; // 4*8 = 32 bytes
5241 
5242 public:
5243  using BuildType = BuildT;
5244  using ValueType = ValueT;
5245  using CoordType = CoordT;
5246 
5247  static const int CacheLevels = 3;
5248 
5249  /// @brief Constructor from a root node
5250  __hostdev__ ReadAccessor(const RootT& root)
5251 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5252  : mKey(CoordType::max())
5253 #else
5254  : mKeys{CoordType::max(), CoordType::max(), CoordType::max()}
5255 #endif
5256  , mRoot(&root)
5257  , mNode{nullptr, nullptr, nullptr}
5258  {
5259  }
5260 
5261  /// @brief Constructor from a grid
5262  __hostdev__ ReadAccessor(const GridT& grid)
5263  : ReadAccessor(grid.tree().root())
5264  {
5265  }
5266 
5267  /// @brief Constructor from a tree
5268  __hostdev__ ReadAccessor(const TreeT& tree)
5269  : ReadAccessor(tree.root())
5270  {
5271  }
5272 
5273  __hostdev__ const RootT& root() const { return *mRoot; }
5274 
5275  /// @brief Default constructors
5276  ReadAccessor(const ReadAccessor&) = default;
5277  ~ReadAccessor() = default;
5278  ReadAccessor& operator=(const ReadAccessor&) = default;
5279 
5280  /// @brief Return a const pointer to the cached node of the specified type
5281  ///
5282  /// @warning The return value could be NULL.
5283  template<typename NodeT>
5284  __hostdev__ const NodeT* getNode() const
5285  {
5286  using T = typename NodeTrait<TreeT, NodeT::LEVEL>::type;
5287  static_assert(util::is_same<T, NodeT>::value, "ReadAccessor::getNode: Invalid node type");
5288  return reinterpret_cast<const T*>(mNode[NodeT::LEVEL]);
5289  }
5290 
5291  template<int LEVEL>
5292  __hostdev__ const typename NodeTrait<TreeT, LEVEL>::type* getNode() const
5293  {
5294  using T = typename NodeTrait<TreeT, LEVEL>::type;
5295  static_assert(LEVEL >= 0 && LEVEL <= 2, "ReadAccessor::getNode: Invalid node type");
5296  return reinterpret_cast<const T*>(mNode[LEVEL]);
5297  }
5298 
5299  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5300  __hostdev__ void clear()
5301  {
5302 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5303  mKey = CoordType::max();
5304 #else
5305  mKeys[0] = mKeys[1] = mKeys[2] = CoordType::max();
5306 #endif
5307  mNode[0] = mNode[1] = mNode[2] = nullptr;
5308  }
5309 
5310 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5311  template<typename NodeT>
5312  __hostdev__ bool isCached(CoordValueType dirty) const
5313  {
5314  if (!mNode[NodeT::LEVEL])
5315  return false;
5316  if (dirty & int32_t(~NodeT::MASK)) {
5317  mNode[NodeT::LEVEL] = nullptr;
5318  return false;
5319  }
5320  return true;
5321  }
5322 
5323  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5324  {
5325  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5326  }
5327 #else
5328  template<typename NodeT>
5329  __hostdev__ bool isCached(const CoordType& ijk) const
5330  {
5331  return (ijk[0] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][0] &&
5332  (ijk[1] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][1] &&
5333  (ijk[2] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][2];
5334  }
5335 #endif
5336 
5337  __hostdev__ ValueType getValue(const CoordType& ijk) const {return this->template get<GetValue<BuildT>>(ijk);}
5338  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5339  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5340  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5341  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5342  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5343  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5344  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5345 
5346  template<typename OpT, typename... ArgsT>
5347  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5348  {
5349 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5350  const CoordValueType dirty = this->computeDirty(ijk);
5351 #else
5352  auto&& dirty = ijk;
5353 #endif
5354  if constexpr(OpT::LEVEL <=0) {
5355  if (this->isCached<LeafT>(dirty)) return ((const LeafT*)mNode[0])->template getAndCache<OpT>(ijk, *this, args...);
5356  } else if constexpr(OpT::LEVEL <= 1) {
5357  if (this->isCached<NodeT1>(dirty)) return ((const NodeT1*)mNode[1])->template getAndCache<OpT>(ijk, *this, args...);
5358  } else if constexpr(OpT::LEVEL <= 2) {
5359  if (this->isCached<NodeT2>(dirty)) return ((const NodeT2*)mNode[2])->template getAndCache<OpT>(ijk, *this, args...);
5360  }
5361  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5362  }
5363 
5364  template<typename OpT, typename... ArgsT>
5365  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5366  {
5367 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5368  const CoordValueType dirty = this->computeDirty(ijk);
5369 #else
5370  auto&& dirty = ijk;
5371 #endif
5372  if constexpr(OpT::LEVEL <= 0) {
5373  if (this->isCached<LeafT>(dirty)) return ((LeafT*)mNode[0])->template setAndCache<OpT>(ijk, *this, args...);
5374  } else if constexpr(OpT::LEVEL <= 1) {
5375  if (this->isCached<NodeT1>(dirty)) return ((NodeT1*)mNode[1])->template setAndCache<OpT>(ijk, *this, args...);
5376  } else if constexpr(OpT::LEVEL <= 2) {
5377  if (this->isCached<NodeT2>(dirty)) return ((NodeT2*)mNode[2])->template setAndCache<OpT>(ijk, *this, args...);
5378  }
5379  return ((RootT*)mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5380  }
5381 
5382  template<typename RayT>
5383  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5384  {
5385 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5386  const CoordValueType dirty = this->computeDirty(ijk);
5387 #else
5388  auto&& dirty = ijk;
5389 #endif
5390  if (this->isCached<LeafT>(dirty)) {
5391  return ((LeafT*)mNode[0])->getDimAndCache(ijk, ray, *this);
5392  } else if (this->isCached<NodeT1>(dirty)) {
5393  return ((NodeT1*)mNode[1])->getDimAndCache(ijk, ray, *this);
5394  } else if (this->isCached<NodeT2>(dirty)) {
5395  return ((NodeT2*)mNode[2])->getDimAndCache(ijk, ray, *this);
5396  }
5397  return mRoot->getDimAndCache(ijk, ray, *this);
5398  }
5399 
5400 private:
5401  /// @brief Allow nodes to insert themselves into the cache.
5402  template<typename>
5403  friend class RootNode;
5404  template<typename, uint32_t>
5405  friend class InternalNode;
5406  template<typename, typename, template<uint32_t> class, uint32_t>
5407  friend class LeafNode;
5408 
5409  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5410  template<typename NodeT>
5411  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
5412  {
5413 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5414  mKey = ijk;
5415 #else
5416  mKeys[NodeT::LEVEL] = ijk & ~NodeT::MASK;
5417 #endif
5418  mNode[NodeT::LEVEL] = node;
5419  }
5420 }; // ReadAccessor<BuildT, 0, 1, 2>
5421 
5422 //////////////////////////////////////////////////
5423 
5424 /// @brief Free-standing function for convenient creation of a ReadAccessor with
5425 /// optional and customizable node caching.
5426 ///
5427 /// @details createAccessor<>(grid): No caching of nodes and hence it's thread-safe but slow
5428 /// createAccessor<0>(grid): Caching of leaf nodes only
5429 /// createAccessor<1>(grid): Caching of lower internal nodes only
5430 /// createAccessor<2>(grid): Caching of upper internal nodes only
5431 /// createAccessor<0,1>(grid): Caching of leaf and lower internal nodes
5432 /// createAccessor<0,2>(grid): Caching of leaf and upper internal nodes
5433 /// createAccessor<1,2>(grid): Caching of lower and upper internal nodes
5434 /// createAccessor<0,1,2>(grid): Caching of all nodes at all tree levels
5435 
5436 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5437 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoGrid<ValueT>& grid)
5438 {
5439  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(grid);
5440 }
5441 
5442 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5443 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoTree<ValueT>& tree)
5444 {
5445  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(tree);
5446 }
5447 
5448 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5449 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoRoot<ValueT>& root)
5450 {
5451  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(root);
5452 }
5453 
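/// @par Example
/// A minimal usage sketch, assuming @c grid is a pointer to a float grid obtained by client
/// code (e.g. from a GridHandle):
/// @code
/// auto acc = nanovdb::createAccessor<0, 1, 2>(*grid);// cache nodes at all three tree levels
/// const float v = acc.getValue(nanovdb::Coord(1, 2, 3));// accelerated random access
/// const bool on = acc.isActive(nanovdb::Coord(1, 2, 3));// active state at the same coordinate
/// @endcode
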
5454 //////////////////////////////////////////////////
5455 
5456 /// @brief This is a convenient class that allows for access to grid meta-data
5457 /// that are independent of the value type of a grid. That is, this class
5458 /// can be used to get information about a grid without actually knowing
5459 /// its ValueType.
5460 class GridMetaData
5461 { // 768 bytes (32 byte aligned)
5462  GridData mGridData; // 672B
5463  TreeData mTreeData; // 64B
5464  CoordBBox mIndexBBox; // 24B. AABB of active values in index space.
5465  uint32_t mRootTableSize, mPadding{0}; // 8B
5466 
5467 public:
5468  template<typename T>
5469  GridMetaData(const NanoGrid<T>& grid)
5470  {
5471  mGridData = *grid.data();
5472  mTreeData = *grid.tree().data();
5473  mIndexBBox = grid.indexBBox();
5474  mRootTableSize = grid.tree().root().getTableSize();
5475  }
5476  GridMetaData(const GridData* gridData)
5477  {
5478  if (GridMetaData::safeCast(gridData)) {
5479  *this = *reinterpret_cast<const GridMetaData*>(gridData);
5480  //util::memcpy(this, (const GridMetaData*)gridData);
5481  } else {// otherwise copy each member individually
5482  mGridData = *gridData;
5483  mTreeData = *reinterpret_cast<const TreeData*>(gridData->treePtr());
5484  mIndexBBox = gridData->indexBBox();
5485  mRootTableSize = gridData->rootTableSize();
5486  }
5487  }
5488  GridMetaData& operator=(const GridMetaData&) = default;
5489  /// @brief return true if the RootData follows right after the TreeData.
5490  /// If so, this implies that it's safe to cast the grid from which
5491  /// this instance was constructed to a GridMetaData
5492  __hostdev__ bool safeCast() const { return mTreeData.isRootNext(); }
5493 
5494  /// @brief return true if it is safe to cast the grid to a pointer
5495  /// of type GridMetaData, i.e. construction can be avoided.
5496  __hostdev__ static bool safeCast(const GridData *gridData){
5497  NANOVDB_ASSERT(gridData && gridData->isValid());
5498  return gridData->isRootConnected();
5499  }
5500  /// @brief return true if it is safe to cast the grid to a pointer
5501  /// of type GridMetaData, i.e. construction can be avoided.
5502  template<typename T>
5503  __hostdev__ static bool safeCast(const NanoGrid<T>& grid){return grid.tree().isRootNext();}
5504  __hostdev__ bool isValid() const { return mGridData.isValid(); }
5505  __hostdev__ const GridType& gridType() const { return mGridData.mGridType; }
5506  __hostdev__ const GridClass& gridClass() const { return mGridData.mGridClass; }
5507  __hostdev__ bool isLevelSet() const { return mGridData.mGridClass == GridClass::LevelSet; }
5508  __hostdev__ bool isFogVolume() const { return mGridData.mGridClass == GridClass::FogVolume; }
5509  __hostdev__ bool isStaggered() const { return mGridData.mGridClass == GridClass::Staggered; }
5510  __hostdev__ bool isPointIndex() const { return mGridData.mGridClass == GridClass::PointIndex; }
5511  __hostdev__ bool isGridIndex() const { return mGridData.mGridClass == GridClass::IndexGrid; }
5512  __hostdev__ bool isPointData() const { return mGridData.mGridClass == GridClass::PointData; }
5513  __hostdev__ bool isMask() const { return mGridData.mGridClass == GridClass::Topology; }
5514  __hostdev__ bool isUnknown() const { return mGridData.mGridClass == GridClass::Unknown; }
5515  __hostdev__ bool hasMinMax() const { return mGridData.mFlags.isMaskOn(GridFlags::HasMinMax); }
5516  __hostdev__ bool hasBBox() const { return mGridData.mFlags.isMaskOn(GridFlags::HasBBox); }
5517  __hostdev__ bool hasLongGridName() const { return mGridData.mFlags.isMaskOn(GridFlags::HasLongGridName); }
5518  __hostdev__ bool hasAverage() const { return mGridData.mFlags.isMaskOn(GridFlags::HasAverage); }
5519  __hostdev__ bool hasStdDeviation() const { return mGridData.mFlags.isMaskOn(GridFlags::HasStdDeviation); }
5520  __hostdev__ bool isBreadthFirst() const { return mGridData.mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
5521  __hostdev__ uint64_t gridSize() const { return mGridData.mGridSize; }
5522  __hostdev__ uint32_t gridIndex() const { return mGridData.mGridIndex; }
5523  __hostdev__ uint32_t gridCount() const { return mGridData.mGridCount; }
5524  __hostdev__ const char* shortGridName() const { return mGridData.mGridName; }
5525  __hostdev__ const Map& map() const { return mGridData.mMap; }
5526  __hostdev__ const Vec3dBBox& worldBBox() const { return mGridData.mWorldBBox; }
5527  __hostdev__ const CoordBBox& indexBBox() const { return mIndexBBox; }
5528  __hostdev__ Vec3d voxelSize() const { return mGridData.mVoxelSize; }
5529  __hostdev__ int blindDataCount() const { return mGridData.mBlindMetadataCount; }
5530  __hostdev__ uint64_t activeVoxelCount() const { return mTreeData.mVoxelCount; }
5531  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const { return mTreeData.mTileCount[level - 1]; }
5532  __hostdev__ uint32_t nodeCount(uint32_t level) const { return mTreeData.mNodeCount[level]; }
5533  __hostdev__ const Checksum& checksum() const { return mGridData.mChecksum; }
5534  __hostdev__ uint32_t rootTableSize() const { return mRootTableSize; }
5535  __hostdev__ bool isEmpty() const { return mRootTableSize == 0; }
5536  __hostdev__ Version version() const { return mGridData.mVersion; }
5537 }; // GridMetaData
5538 
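/// @par Example
/// A minimal sketch that inspects a grid buffer without knowing its ValueType, assuming
/// @c gridData points to the start of a NanoVDB grid (e.g. obtained from a GridHandle's gridData method):
/// @code
/// const nanovdb::GridMetaData meta(gridData);// gathers grid, tree and root meta data from the buffer
/// if (meta.isLevelSet()) {
///     const uint64_t voxels = meta.activeVoxelCount();
///     const nanovdb::Vec3d dx = meta.voxelSize();
/// }
/// @endcode
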
5539 /// @brief Class to access points at a specific voxel location
5540 ///
5541 /// @note If the grid class is GridClass::PointIndex, AttT should be uint32_t, and if it is GridClass::PointData, AttT should be Vec3f
5542 template<typename AttT, typename BuildT = uint32_t>
5543 class PointAccessor : public DefaultReadAccessor<BuildT>
5544 {
5545  using AccT = DefaultReadAccessor<BuildT>;
5546  const NanoGrid<BuildT>& mGrid;
5547  const AttT* mData;
5548 
5549 public:
5550  PointAccessor(const NanoGrid<BuildT>& grid)
5551  : AccT(grid.tree().root())
5552  , mGrid(grid)
5553  , mData(grid.template getBlindData<AttT>(0))
5554  {
5555  NANOVDB_ASSERT(grid.gridType() == toGridType<BuildT>());
5558  }
5559 
5560  /// @brief return true if this accessor was initialized correctly
5561  __hostdev__ operator bool() const { return mData != nullptr; }
5562 
5563  __hostdev__ const NanoGrid<BuildT>& grid() const { return mGrid; }
5564 
5565  /// @brief Return the total number of points in the grid and set the
5566  /// iterators to the complete range of points.
5567  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5568  {
5569  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5570  begin = mData;
5571  end = begin + count;
5572  return count;
5573  }
5574  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5575  /// If this return value is larger than zero then the iterators @a begin and @a end
5576  /// will point to all the attributes contained within that leaf node.
5577  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5578  {
5579  auto* leaf = this->probeLeaf(ijk);
5580  if (leaf == nullptr) {
5581  return 0;
5582  }
5583  begin = mData + leaf->minimum();
5584  end = begin + leaf->maximum();
5585  return leaf->maximum();
5586  }
5587 
5588  /// @brief get iterators over attributes to points at a specific voxel location
5589  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5590  {
5591  begin = end = nullptr;
5592  if (auto* leaf = this->probeLeaf(ijk)) {
5593  const uint32_t offset = NanoLeaf<BuildT>::CoordToOffset(ijk);
5594  if (leaf->isActive(offset)) {
5595  begin = mData + leaf->minimum();
5596  end = begin + leaf->getValue(offset);
5597  if (offset > 0u)
5598  begin += leaf->getValue(offset - 1);
5599  }
5600  }
5601  return end - begin;
5602  }
5603 }; // PointAccessor
5604 
5605 template<typename AttT>
5606 class PointAccessor<AttT, Point> : public DefaultReadAccessor<Point>
5607 {
5608  using AccT = DefaultReadAccessor<Point>;
5609  const NanoGrid<Point>& mGrid;
5610  const AttT* mData;
5611 
5612 public:
5613  PointAccessor(const NanoGrid<Point>& grid)
5614  : AccT(grid.tree().root())
5615  , mGrid(grid)
5616  , mData(grid.template getBlindData<AttT>(0))
5617  {
5618  NANOVDB_ASSERT(mData);
5625  }
5626 
5627  /// @brief return true if this accessor was initialized correctly
5628  __hostdev__ operator bool() const { return mData != nullptr; }
5629 
5630  __hostdev__ const NanoGrid<Point>& grid() const { return mGrid; }
5631 
5632  /// @brief Return the total number of points in the grid and set the
5633  /// iterators to the complete range of points.
5634  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5635  {
5636  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5637  begin = mData;
5638  end = begin + count;
5639  return count;
5640  }
5641  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5642  /// If this return value is larger than zero then the iterators @a begin and @a end
5643  /// will point to all the attributes contained within that leaf node.
5644  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5645  {
5646  auto* leaf = this->probeLeaf(ijk);
5647  if (leaf == nullptr)
5648  return 0;
5649  begin = mData + leaf->offset();
5650  end = begin + leaf->pointCount();
5651  return leaf->pointCount();
5652  }
5653 
5654  /// @brief get iterators over attributes to points at a specific voxel location
5655  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5656  {
5657  if (auto* leaf = this->probeLeaf(ijk)) {
5658  const uint32_t n = NanoLeaf<Point>::CoordToOffset(ijk);
5659  if (leaf->isActive(n)) {
5660  begin = mData + leaf->first(n);
5661  end = mData + leaf->last(n);
5662  return end - begin;
5663  }
5664  }
5665  begin = end = nullptr;
5666  return 0u; // no leaf or inactive voxel
5667  }
5668 }; // PointAccessor<AttT, Point>
5669 
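/// @par Example
/// A minimal sketch of iterating over point attributes, assuming @c grid points to a
/// NanoGrid<nanovdb::Point> whose first blind-data channel stores Vec3f positions:
/// @code
/// nanovdb::PointAccessor<nanovdb::Vec3f, nanovdb::Point> acc(*grid);
/// if (acc) {// true if the blind-data channel was found
///     const nanovdb::Vec3f *begin = nullptr, *end = nullptr;
///     const uint64_t count = acc.voxelPoints(nanovdb::Coord(0, 0, 0), begin, end);
///     for (auto* p = begin; p != end; ++p) { /* process the position *p */ }
/// }
/// @endcode
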
5670 /// @brief Class to access values in channels at a specific voxel location.
5671 ///
5672 /// @note The ChannelT template parameter can be either const or non-const.
5673 template<typename ChannelT, typename IndexT = ValueIndex>
5674 class ChannelAccessor : public DefaultReadAccessor<IndexT>
5675 {
5676  static_assert(BuildTraits<IndexT>::is_index, "Expected an index build type");
5677  using BaseT = DefaultReadAccessor<IndexT>;
5678 
5679  const NanoGrid<IndexT>& mGrid;
5680  ChannelT* mChannel;
5681 
5682 public:
5683  using ValueType = ChannelT;
5684  using TreeType = NanoTree<IndexT>;
5685  using AccessorType = ChannelAccessor<ChannelT, IndexT>;
5686 
5687  /// @brief Ctor from an IndexGrid and an integer ID of an internal channel
5688  /// that is assumed to exist as blind data in the IndexGrid.
5689  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, uint32_t channelID = 0u)
5690  : BaseT(grid.tree().root())
5691  , mGrid(grid)
5692  , mChannel(nullptr)
5693  {
5694  NANOVDB_ASSERT(isIndex(grid.gridType()));
5696  this->setChannel(channelID);
5697  }
5698 
5699  /// @brief Ctor from an IndexGrid and an external channel
5700  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, ChannelT* channelPtr)
5701  : BaseT(grid.tree().root())
5702  , mGrid(grid)
5703  , mChannel(channelPtr)
5704  {
5705  NANOVDB_ASSERT(isIndex(grid.gridType()));
5707  }
5708 
5709  /// @brief return true if this accessor was initialized correctly
5710  __hostdev__ operator bool() const { return mChannel != nullptr; }
5711 
5712  /// @brief Return a const reference to the IndexGrid
5713  __hostdev__ const NanoGrid<IndexT>& grid() const { return mGrid; }
5714 
5715  /// @brief Return a const reference to the tree of the IndexGrid
5716  __hostdev__ const TreeType& tree() const { return mGrid.tree(); }
5717 
5718  /// @brief Return a vector of the axial voxel sizes
5719  __hostdev__ const Vec3d& voxelSize() const { return mGrid.voxelSize(); }
5720 
5721  /// @brief Return total number of values indexed by the IndexGrid
5722  __hostdev__ const uint64_t& valueCount() const { return mGrid.valueCount(); }
5723 
5724  /// @brief Change to an external channel
5725  /// @return Pointer to channel data
5726  __hostdev__ ChannelT* setChannel(ChannelT* channelPtr) {return mChannel = channelPtr;}
5727 
5728  /// @brief Change to an internal channel, assuming it exists as blind data
5729  /// in the IndexGrid.
5730  /// @return Pointer to channel data, which could be NULL if channelID is out of range or
5731  /// if ChannelT does not match the value type of the blind data
5732  __hostdev__ ChannelT* setChannel(uint32_t channelID)
5733  {
5734  return mChannel = const_cast<ChannelT*>(mGrid.template getBlindData<ChannelT>(channelID));
5735  }
5736 
5737  /// @brief Return the linear offset into a channel that maps to the specified coordinate
5738  __hostdev__ uint64_t getIndex(const math::Coord& ijk) const { return BaseT::getValue(ijk); }
5739  __hostdev__ uint64_t idx(int i, int j, int k) const { return BaseT::getValue(math::Coord(i, j, k)); }
5740 
5741  /// @brief Return the value from a cached channel that maps to the specified coordinate
5742  __hostdev__ ChannelT& getValue(const math::Coord& ijk) const { return mChannel[BaseT::getValue(ijk)]; }
5743  __hostdev__ ChannelT& operator()(const math::Coord& ijk) const { return this->getValue(ijk); }
5744  __hostdev__ ChannelT& operator()(int i, int j, int k) const { return this->getValue(math::Coord(i, j, k)); }
5745 
5746  /// @brief Return the active state of the specified voxel and update @a v with its value
5747  __hostdev__ bool probeValue(const math::Coord& ijk, typename util::remove_const<ChannelT>::type& v) const
5748  {
5749  uint64_t idx;
5750  const bool isActive = BaseT::probeValue(ijk, idx);
5751  v = mChannel[idx];
5752  return isActive;
5753  }
5754  /// @brief Return the value from a specified channel that maps to the specified coordinate
5755  ///
5756  /// @note The template parameter can be either const or non-const
5757  template<typename T>
5758  __hostdev__ T& getValue(const math::Coord& ijk, T* channelPtr) const { return channelPtr[BaseT::getValue(ijk)]; }
5759 
5760 }; // ChannelAccessor
5761 
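/// @par Example
/// A minimal sketch, assuming @c grid points to an IndexGrid, i.e. NanoGrid<nanovdb::ValueIndex>,
/// with at least one float channel stored as blind data:
/// @code
/// nanovdb::ChannelAccessor<float> acc(*grid, 0u);// bind to internal channel 0
/// if (acc) {
///     const float    v = acc(nanovdb::Coord(1, 2, 3));// channel value at ijk
///     const uint64_t i = acc.getIndex(nanovdb::Coord(1, 2, 3));// linear offset into the channel
/// }
/// @endcode
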
5762 #if 0
5763 // This MiniGridHandle class is only included as a stand-alone example. Note that aligned_alloc is a C++17 feature!
5764 // Normally we recommend using GridHandle defined in util/GridHandle.h but this minimal implementation could be an
5765 // alternative when using the IO methods defined below.
5766 struct MiniGridHandle {
5767  struct BufferType {
5768  uint8_t *data;
5769  uint64_t size;
5770  BufferType(uint64_t n=0) : data(std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, n)), size(n) {assert(isValid(data));}
5771  BufferType(BufferType &&other) : data(other.data), size(other.size) {other.data=nullptr; other.size=0;}
5772  ~BufferType() {std::free(data);}
5773  BufferType& operator=(const BufferType &other) = delete;
5774  BufferType& operator=(BufferType &&other){data=other.data; size=other.size; other.data=nullptr; other.size=0; return *this;}
5775  static BufferType create(size_t n, BufferType* dummy = nullptr) {return BufferType(n);}
5776  } buffer;
5777  MiniGridHandle(BufferType &&buf) : buffer(std::move(buf)) {}
5778  const uint8_t* data() const {return buffer.data;}
5779 };// MiniGridHandle
5780 #endif
5781 
5782 namespace io {
5783 
5784 /// @brief Define compression codecs
5785 ///
5786 /// @note NONE is the default, ZIP is slow but compact and BLOSC offers a great balance.
5787 ///
5788 /// @throw NanoVDB optionally supports ZIP and BLOSC compression and will throw an exception
5789 /// if their support is required but missing.
5790 enum class Codec : uint16_t { NONE = 0,
5791  ZIP = 1,
5792  BLOSC = 2,
5793  End = 3,
5794  StrLen = 6 + End };
5795 
5796 __hostdev__ inline const char* toStr(char *dst, Codec codec)
5797 {
5798  switch (codec){
5799  case Codec::NONE: return util::strcpy(dst, "NONE");
5800  case Codec::ZIP: return util::strcpy(dst, "ZIP");
5801  case Codec::BLOSC : return util::strcpy(dst, "BLOSC");// StrLen = 5 + 1 + End
5802  default: return util::strcpy(dst, "END");
5803  }
5804 }
5805 
5806 __hostdev__ inline Codec toCodec(const char *str)
5807 {
5808  if (util::streq(str, "none")) return Codec::NONE;
5809  if (util::streq(str, "zip")) return Codec::ZIP;
5810  if (util::streq(str, "blosc")) return Codec::BLOSC;
5811  return Codec::End;
5812 }
5813 
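/// @par Example
/// A small sketch of the codec/string conversions; note that toStr writes upper-case names
/// whereas toCodec expects lower-case input:
/// @code
/// char str[8];
/// printf("codec = %s\n", nanovdb::io::toStr(str, nanovdb::io::Codec::BLOSC));// prints "BLOSC"
/// assert(nanovdb::io::toCodec("blosc") == nanovdb::io::Codec::BLOSC);
/// @endcode
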
5814 /// @brief Data encoded at the head of each segment of a file or stream.
5815 ///
5816 /// @note A file or stream is composed of one or more segments that each contain
5817 /// one or more grids.
5818 struct FileHeader {// 16 bytes
5819  uint64_t magic;// 8 bytes
5820  Version version;// 4 bytes version numbers
5821  uint16_t gridCount;// 2 bytes
5822  Codec codec;// 2 bytes
5823  bool isValid() const {return magic == NANOVDB_MAGIC_NUMB || magic == NANOVDB_MAGIC_FILE;}
5824 }; // FileHeader ( 16 bytes = 2 words )
5825 
5826 // @brief Data encoded for each of the grids associated with a segment.
5827 // Grid size in memory (uint64_t) |
5828 // Grid size on disk (uint64_t) |
5829 // Grid name hash key (uint64_t) |
5830 // Number of active voxels (uint64_t) |
5831 // Grid type (uint32_t) |
5832 // Grid class (uint32_t) |
5833 // Characters in grid name (uint32_t) |
5834 // AABB in world space (2*3*double) | one per grid in file
5835 // AABB in index space (2*3*int) |
5836 // Size of a voxel in world units (3*double) |
5837 // Byte size of the grid name (uint32_t) |
5838 // Number of nodes per level (4*uint32_t) |
5839 // Number of active tiles per level (3*uint32_t) |
5840 // Codec for file compression (uint16_t) |
5841 // Padding due to 8B alignment (uint16_t) |
5842 // Version number (uint32_t) |
5843 struct FileMetaData
5844 {// 176 bytes
5845  uint64_t gridSize, fileSize, nameKey, voxelCount; // 4 * 8 = 32B.
5846  GridType gridType; // 4B.
5847  GridClass gridClass; // 4B.
5848  Vec3dBBox worldBBox; // 2 * 3 * 8 = 48B.
5849  CoordBBox indexBBox; // 2 * 3 * 4 = 24B.
5850  Vec3d voxelSize; // 24B.
5851  uint32_t nameSize; // 4B.
5852  uint32_t nodeCount[4]; //4 x 4 = 16B
5853  uint32_t tileCount[3];// 3 x 4 = 12B
5854  Codec codec; // 2B
5855  uint16_t padding;// 2B, due to 8B alignment from uint64_t
5856  Version version;// 4B
5857 }; // FileMetaData
5858 
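/// @par Example
/// A compile-time sanity-check sketch of the sizes documented in the comments above:
/// @code
/// static_assert(sizeof(nanovdb::io::FileHeader)   == 16u,  "FileHeader is documented as 16 bytes");
/// static_assert(sizeof(nanovdb::io::FileMetaData) == 176u, "FileMetaData is documented as 176 bytes");
/// @endcode
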
5859 // the following code block uses std and therefore needs to be ignored by CUDA and HIP
5860 #if !defined(__CUDA_ARCH__) && !defined(__HIP__)
5861 
5862 // Note that starting with version 32.6.0 it is possible to write and read raw grid buffers to
5863 // files, e.g. os.write((const char*)buffer.data(), buffer.size()) or more conveniently as
5864 // handle.write(fileName). In addition to this simple approach we offer the methods below to
5865 // write traditional uncompressed nanovdb files that unlike raw files include metadata that
5866 // is used for tools like nanovdb_print.
5867 
5868 ///
5869 /// @brief This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h
5870 /// Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also
5871 /// works if client code only includes PNanoVDB.h!
5872 ///
5873 /// @details Writes a raw NanoVDB buffer, possibly with multiple grids, to a stream WITHOUT compression.
5874 /// It follows all the conventions in util/IO.h so the stream can be read by all existing client
5875 /// code of NanoVDB.
5876 ///
5877 /// @note This method will always write uncompressed grids to the stream, i.e. Blosc or ZIP compression
5878 /// is never applied! This is a fundamental limitation and feature of this standalone function.
5879 ///
5880 /// @throw std::invalid_argument if buffer does not point to a valid NanoVDB grid.
5881 ///
5882 /// @warning This is pretty ugly code that involves lots of pointer and bit manipulations - not for the faint of heart :)
5883 template<typename StreamT> // StreamT class must support: "void write(const char*, size_t)"
5884 void writeUncompressedGrid(StreamT& os, const GridData* gridData, bool raw = false)
5885 {
5886  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMB || gridData->mMagic == NANOVDB_MAGIC_GRID);
5887  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
5888  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
5889 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
5890  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u, Codec::NONE};
5891 #else
5892  FileHeader head{NANOVDB_MAGIC_NUMB, gridData->mVersion, 1u, Codec::NONE};
5893 #endif
5894  const char* gridName = gridData->gridName();
5895  const uint32_t nameSize = util::strlen(gridName) + 1;// include '\0'
5896  const TreeData* treeData = (const TreeData*)(gridData->treePtr());
5897  FileMetaData meta{gridData->mGridSize, gridData->mGridSize, 0u, treeData->mVoxelCount,
5898  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
5899  treeData->bbox(), gridData->mVoxelSize, nameSize,
5900  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
5901  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
5902  Codec::NONE, 0u, gridData->mVersion }; // FileMetaData
5903  os.write((const char*)&head, sizeof(FileHeader)); // write header
5904  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
5905  os.write(gridName, nameSize); // write grid name
5906  }
5907  os.write((const char*)gridData, gridData->mGridSize);// write the grid
5908 }// writeUncompressedGrid
5909 
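/// @par Example
/// A minimal sketch of writing the first grid of a GridHandle to a file, assuming @c handle is a
/// GridHandle and <fstream> is available on the host (the file name below is hypothetical):
/// @code
/// std::ofstream os("temp.nvdb", std::ios::out | std::ios::binary);
/// nanovdb::io::writeUncompressedGrid(os, handle.gridData(0u));// write grid 0 with a FileHeader and FileMetaData
/// @endcode
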
5910 /// @brief write multiple NanoVDB grids to a single file, without compression.
5911 /// @note To write all grids in a single GridHandle simply use handle.write("fileName")
5912 template<typename GridHandleT, template<typename...> class VecT>
5913 void writeUncompressedGrids(const char* fileName, const VecT<GridHandleT>& handles, bool raw = false)
5914 {
5915 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ofstream or FILE implementations
5916  std::ofstream os(fileName, std::ios::out | std::ios::binary | std::ios::trunc);
5917 #else
5918  struct StreamT {
5919  FILE* fptr;
5920  StreamT(const char* name) { fptr = fopen(name, "wb"); }
5921  ~StreamT() { fclose(fptr); }
5922  void write(const char* data, size_t n) { fwrite(data, 1, n, fptr); }
5923  bool is_open() const { return fptr != NULL; }
5924  } os(fileName);
5925 #endif
5926  if (!os.is_open()) {
5927  fprintf(stderr, "nanovdb::writeUncompressedGrids: Unable to open file \"%s\" for output\n", fileName);
5928  exit(EXIT_FAILURE);
5929  }
5930  for (auto& h : handles) {
5931  for (uint32_t n=0; n<h.gridCount(); ++n) writeUncompressedGrid(os, h.gridData(n), raw);
5932  }
5933 } // writeUncompressedGrids
5934 
5935 /// @brief read all uncompressed grids from a stream and return their handles.
5936 ///
5937 /// @throw std::invalid_argument if stream does not contain a single uncompressed valid NanoVDB grid
5938 ///
5939 /// @details StreamT class must support: "bool read(char*, size_t)" and "void skip(uint32_t)"
5940 template<typename GridHandleT, typename StreamT, template<typename...> class VecT>
5941 VecT<GridHandleT> readUncompressedGrids(StreamT& is, const typename GridHandleT::BufferType& pool = typename GridHandleT::BufferType())
5942 {
5943  VecT<GridHandleT> handles;
5944  GridData data;
5945  is.read((char*)&data, sizeof(GridData));
5946  if (data.isValid()) {// stream contains a raw grid buffer
5947  uint64_t size = data.mGridSize, sum = 0u;
5948  while(data.mGridIndex + 1u < data.mGridCount) {
5949  is.skip(data.mGridSize - sizeof(GridData));// skip grid
5950  is.read((char*)&data, sizeof(GridData));// read sizeof(GridData) bytes
5951  sum += data.mGridSize;
5952  }
5953  is.skip(-int64_t(sum + sizeof(GridData)));// rewind to start
5954  auto buffer = GridHandleT::BufferType::create(size + sum, &pool);
5955  is.read((char*)(buffer.data()), buffer.size());
5956  handles.emplace_back(std::move(buffer));
5957  } else {// Header0, MetaData0, gridName0, Grid0...HeaderN, MetaDataN, gridNameN, GridN
5958  is.skip(-sizeof(GridData));// rewind
5959  FileHeader head;
5960  while(is.read((char*)&head, sizeof(FileHeader))) {
5961  if (!head.isValid()) {
5962  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid magic number = \"%s\"\n", (const char*)&(head.magic));
5963  exit(EXIT_FAILURE);
5964  } else if (!head.version.isCompatible()) {
5965  char str[20];
5966  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid major version = \"%s\"\n", toStr(str, head.version));
5967  exit(EXIT_FAILURE);
5968  } else if (head.codec != Codec::NONE) {
5969  char str[8];
5970  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid codec = \"%s\"\n", toStr(str, head.codec));
5971  exit(EXIT_FAILURE);
5972  }
5973  FileMetaData meta;
5974  for (uint16_t i = 0; i < head.gridCount; ++i) { // read all grids in segment
5975  is.read((char*)&meta, sizeof(FileMetaData));// read meta data
5976  is.skip(meta.nameSize); // skip grid name
5977  auto buffer = GridHandleT::BufferType::create(meta.gridSize, &pool);
5978  is.read((char*)buffer.data(), meta.gridSize);// read grid
5979  handles.emplace_back(std::move(buffer));
5980  }// loop over grids in segment
5981  }// loop over segments
5982  }
5983  return handles;
5984 } // readUncompressedGrids
5985 
5986 /// @brief Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
5987 template<typename GridHandleT, template<typename...> class VecT>
5988 VecT<GridHandleT> readUncompressedGrids(const char* fileName, const typename GridHandleT::BufferType& buffer = typename GridHandleT::BufferType())
5989 {
5990 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ifstream or FILE implementations
5991  struct StreamT : public std::ifstream {
5992  StreamT(const char* name) : std::ifstream(name, std::ios::in | std::ios::binary){}
5993  void skip(int64_t off) { this->seekg(off, std::ios_base::cur); }
5994  };
5995 #else
5996  struct StreamT {
5997  FILE* fptr;
5998  StreamT(const char* name) { fptr = fopen(name, "rb"); }
5999  ~StreamT() { fclose(fptr); }
6000  bool read(char* data, size_t n) {
6001  size_t m = fread(data, 1, n, fptr);
6002  return n == m;
6003  }
6004  void skip(int64_t off) { fseek(fptr, (long int)off, SEEK_CUR); }
6005  bool is_open() const { return fptr != NULL; }
6006  };
6007 #endif
6008  StreamT is(fileName);
6009  if (!is.is_open()) {
6010  fprintf(stderr, "nanovdb::readUncompressedGrids: Unable to open file \"%s\" for input\n", fileName);
6011  exit(EXIT_FAILURE);
6012  }
6013  return readUncompressedGrids<GridHandleT, StreamT, VecT>(is, buffer);
6014 } // readUncompressedGrids
6015 
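/// @par Example
/// A minimal sketch of reading all grids back from a file, assuming GridHandle (from util/GridHandle.h),
/// a host buffer type such as nanovdb::HostBuffer, and std::vector as the container (file name hypothetical):
/// @code
/// auto handles = nanovdb::io::readUncompressedGrids<nanovdb::GridHandle<nanovdb::HostBuffer>, std::vector>("temp.nvdb");
/// for (auto& h : handles) {
///     const nanovdb::GridMetaData meta(h.gridData(0u));// inspect each grid without knowing its ValueType
/// }
/// @endcode
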
6016 #endif // if !defined(__CUDA_ARCH__) && !defined(__HIP__)
6017 
6018 } // namespace io
6019 
6020 // ----------------------------> Implementations of random access methods <--------------------------------------
6021 
6022 /**
6023 * @brief Below is an example of a struct used for random get methods.
6024 * @note All member methods, data, and types are mandatory.
6025 * @code
6026  template<typename BuildT>
6027  struct GetOpT {
6028  using Type = typename BuildToValueMap<BuildT>::Type;// return type
6029  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6030  __hostdev__ static Type get(const NanoRoot<BuildT>& root, args...) { }
6031  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, args...) { }
6032  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, args...) { }
6033  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, args...) { }
6034  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6035  };
6036  @endcode
6037 
6038  * @brief Below is an example of the struct used for random set methods
6039  * @note All member methods and data are mandatory.
6040  * @code
6041  template<typename BuildT>
6042  struct SetOpT {
6043  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6044  __hostdev__ static void set(NanoRoot<BuildT>& root, args...) { }
6045  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, args...) { }
6046  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, args...) { }
6047  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, args...) { }
6048  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6049  };
6050  @endcode
6051 **/
6052 
6053 /// @brief Implements Tree::getValue(math::Coord), i.e. return the value associated with a specific coordinate @c ijk.
6054 /// @tparam BuildT Build type of the grid being called
6055 /// @details The value at a coordinate either maps to the background, a tile value or a leaf value.
6056 template<typename BuildT>
6057 struct GetValue
6058 {
6059  using Type = typename BuildToValueMap<BuildT>::Type;// return type
6060  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6061  __hostdev__ static Type get(const NanoRoot<BuildT>& root) { return root.mBackground; }
6062  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.value; }
6063  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6064  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6065  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.getValue(n); } // works with all build types
6066 }; // GetValue<BuildT>
6067 
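/// @par Example
/// A minimal sketch of dispatching GetValue through the templated ReadAccessor::get method
/// defined above, assuming @c acc is a ReadAccessor of a float grid:
/// @code
/// const float v = acc.get<nanovdb::GetValue<float>>(nanovdb::Coord(1, 2, 3));// equivalent to acc.getValue(ijk)
/// @endcode
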
6068 template<typename BuildT>
6069 struct SetValue
6070 {
6071  static_assert(!BuildTraits<BuildT>::is_special, "SetValue does not support special value types, e.g. Fp4, Fp8, Fp16, FpN");
6072  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6073  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6074  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6075  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, const ValueT& v) { tile.value = v; }
6076  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6077  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6078  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6079 }; // SetValue<BuildT>
6080 
6081 template<typename BuildT>
6082 struct SetVoxel
6083 {
6084  static_assert(!BuildTraits<BuildT>::is_special, "SetVoxel does not support special value types. e.g. Fp4, Fp8, Fp16, FpN");
6085  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6086  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6087  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6088  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile&, const ValueT&) {} // no-op
6089  __hostdev__ static void set(NanoUpper<BuildT>&, uint32_t, const ValueT&) {} // no-op
6090  __hostdev__ static void set(NanoLower<BuildT>&, uint32_t, const ValueT&) {} // no-op
6091  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6092 }; // SetVoxel<BuildT>
6093 
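/// @par Example
/// A minimal sketch of overwriting an existing voxel value in-place through the templated
/// ReadAccessor::set method defined above, assuming @c acc is a ReadAccessor of a float grid
/// that resides in writable memory (SetVoxel never changes the tree topology or active states):
/// @code
/// acc.set<nanovdb::SetVoxel<float>>(nanovdb::Coord(1, 2, 3), 2.5f);
/// @endcode
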
6094 /// @brief Implements Tree::isActive(math::Coord)
6095 /// @tparam BuildT Build type of the grid being called
6096 template<typename BuildT>
6097 struct GetState
6098 {
6099  using Type = bool;
6100  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6101  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return false; }
6102  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.state > 0; }
6103  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6104  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6105  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.mValueMask.isOn(n); }
6106 }; // GetState<BuildT>
6107 
6108 /// @brief Implements Tree::getDim(math::Coord)
6109 /// @tparam BuildT Build type of the grid being called
6110 template<typename BuildT>
6111 struct GetDim
6112 {
6113  using Type = uint32_t;
6114  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6115  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return 0u; } // background
6116  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return 4096u; }
6117  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return 128u; }
6118  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return 8u; }
6119  __hostdev__ static Type get(const NanoLeaf<BuildT>&, uint32_t) { return 1u; }
6120 }; // GetDim<BuildT>
6121 
6122 /// @brief Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord)
6123 /// @tparam BuildT Build type of the grid being called
6124 template<typename BuildT>
6125 struct GetLeaf
6126 {
6127  using Type = const NanoLeaf<BuildT>*;
6128  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6129  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6130  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6131  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6132  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
6133  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t) { return &leaf; }
6134 }; // GetLeaf<BuildT>
6135 
6136 /// @brief Return a pointer to the lower internal node where math::Coord maps to one of its values, i.e. terminates
6137 /// @tparam BuildT Build type of the grid being called
6138 template<typename BuildT>
6139 struct GetLower
6140 {
6141  using Type = const NanoLower<BuildT>*;
6142  static constexpr int LEVEL = 1;// minimum level for the descent during top-down traversal
6143  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6144  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6145  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6146  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t) { return &node; }
6147 }; // GetLower<BuildT>
6148 
6149 /// @brief Return a pointer to the upper internal node where math::Coord maps to one of its values, i.e. terminates
6150 /// @tparam BuildT Build type of the grid being called
6151 template<typename BuildT>
6152 struct GetUpper
6153 {
6154  using Type = const NanoUpper<BuildT>*;
6155  static constexpr int LEVEL = 2;// minimum level for the descent during top-down traversal
6156  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6157  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6158  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t) { return &node; }
6159 }; // GetUpper<BuildT>
6160 
6161 /// @brief Return a pointer to the root Tile where math::Coord maps to one of its values, i.e. terminates
6162 /// @tparam BuildT Build type of the grid being called
6163 template<typename BuildT>
6164 struct GetTile
6165 {
6166  using Type = const typename NanoRoot<BuildT>::Tile*;
6167  static constexpr int LEVEL = 3;// minimum level for the descent during top-down traversal
6168  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6169  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile &tile) { return &tile; }
6170 }; // GetTile<BuildT>
6171 
6172 /// @brief Implements Tree::probeValue(math::Coord)
6173 /// @tparam BuildT Build type of the grid being called
6174 template<typename BuildT>
6175 struct ProbeValue
6176 {
6177  using Type = bool;
6178  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6179  using ValueT = typename BuildToValueMap<BuildT>::Type;
6180  __hostdev__ static Type get(const NanoRoot<BuildT>& root, ValueT& v)
6181  {
6182  v = root.mBackground;
6183  return false;
6184  }
6185  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, ValueT& v)
6186  {
6187  v = tile.value;
6188  return tile.state > 0u;
6189  }
6190  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, ValueT& v)
6191  {
6192  v = node.mTable[n].value;
6193  return node.mValueMask.isOn(n);
6194  }
6195  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, ValueT& v)
6196  {
6197  v = node.mTable[n].value;
6198  return node.mValueMask.isOn(n);
6199  }
6200  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, ValueT& v)
6201  {
6202  v = leaf.getValue(n);
6203  return leaf.mValueMask.isOn(n);
6204  }
6205 }; // ProbeValue<BuildT>
6206 
6207 /// @brief Implements Tree::getNodeInfo(math::Coord)
6208 /// @tparam BuildT Build type of the grid being called
6209 template<typename BuildT>
6210 struct GetNodeInfo
6211 {
6212  using ValueType = typename NanoLeaf<BuildT>::ValueType;
6213  using FloatType = typename NanoLeaf<BuildT>::FloatType;
6214  struct NodeInfo
6215  {
6216  uint32_t level, dim;
6217  ValueType minimum, maximum;
6218  FloatType average, stdDevi;
6219  CoordBBox bbox;
6220  };
6221  static constexpr int LEVEL = 0;
6222  using Type = NodeInfo;
6223  __hostdev__ static Type get(const NanoRoot<BuildT>& root)
6224  {
6225  return NodeInfo{3u, NanoUpper<BuildT>::DIM, root.minimum(), root.maximum(), root.average(), root.stdDeviation(), root.bbox()};
6226  }
6227  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile)
6228  {
6229  return NodeInfo{3u, NanoUpper<BuildT>::DIM, tile.value, tile.value, static_cast<FloatType>(tile.value), 0, CoordBBox::createCube(tile.origin(), NanoUpper<BuildT>::DIM)};
6230  }
6231  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n)
6232  {
6233  return NodeInfo{2u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6234  }
6235  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n)
6236  {
6237  return NodeInfo{1u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6238  }
6239  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n)
6240  {
6241  return NodeInfo{0u, leaf.dim(), leaf.minimum(), leaf.maximum(), leaf.average(), leaf.stdDeviation(), leaf.bbox()};
6242  }
6243 }; // GetNodeInfo<BuildT>
6244 
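/// @par Example
/// A minimal sketch of querying per-node statistics at a coordinate via GetNodeInfo, assuming
/// @c acc is a ReadAccessor of a float grid:
/// @code
/// auto info = acc.get<nanovdb::GetNodeInfo<float>>(nanovdb::Coord(1, 2, 3));
/// // info.level is 0 for a leaf, 1 or 2 for internal nodes, and 3 for a root tile or the background;
/// // info.minimum, info.maximum, info.average and info.stdDevi hold the statistics of that node
/// @endcode
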
6245 } // namespace nanovdb ===================================================================
6246 
6247 #endif // end of NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
typename FloatTraits< BuildT >::FloatType FloatType
Definition: NanoVDB.h:3623
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:3658
__hostdev__ ValueOffIterator beginValueOff() const
Definition: NanoVDB.h:4299
__hostdev__ DenseIter()
Definition: NanoVDB.h:2955
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:2228
__hostdev__ bool probeValue(const math::Coord &ijk, typename util::remove_const< ChannelT >::type &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:5747
__hostdev__ ValueT value() const
Definition: NanoVDB.h:2718
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3778
typename BuildT::RootType RootType
Definition: NanoVDB.h:2103
__hostdev__ const Vec3d & voxelSize() const
Return a const reference to the size of a voxel in world units.
Definition: NanoVDB.h:2163
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5337
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1075
ValueT ValueType
Definition: NanoVDB.h:4906
__hostdev__ uint64_t full() const
Definition: NanoVDB.h:1827
__hostdev__ const char * shortGridName() const
Return a c-string with the name of this grid, truncated to 255 characters.
Definition: NanoVDB.h:2262
__hostdev__ util::enable_if<!util::is_same< MaskT, Mask >::value, Mask & >::type operator=(const MaskT &other)
Assignment operator that works with openvdb::util::NodeMask.
Definition: NanoVDB.h:1172
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this root node and any of its child n...
Definition: NanoVDB.h:3010
bool type
Definition: NanoVDB.h:494
Visits all tile values in this node, i.e. both inactive and active tiles.
Definition: NanoVDB.h:3311
__hostdev__ math::BBox< CoordT > bbox() const
Return the bounding box in index space of active values in this leaf node.
Definition: NanoVDB.h:4412
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4326
uint16_t ArrayType
Definition: NanoVDB.h:4153
__hostdev__ CheckMode toCheckMode(const Checksum &checksum)
Maps 64 bit checksum to CheckMode enum.
Definition: NanoVDB.h:1866
C++11 implementation of std::enable_if.
Definition: Util.h:335
FloatType mStdDevi
Definition: NanoVDB.h:3635
float type
Definition: NanoVDB.h:501
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3874
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5343
__hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:4403
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3998
__hostdev__ bool isEmpty() const
Definition: NanoVDB.h:5535
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4848
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:4368
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:5512
typename util::match_const< DataType, RootT >::type DataT
Definition: NanoVDB.h:2830
void writeUncompressedGrids(const char *fileName, const VecT< GridHandleT > &handles, bool raw=false)
write multiple NanoVDB grids to a single file, without compression.
Definition: NanoVDB.h:5913
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2407
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:3035
Definition: NanoVDB.h:5843
__hostdev__ Vec3d getVoxelSize() const
Return a voxels size in each coordinate direction, measured at the origin.
Definition: NanoVDB.h:1511
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4819
StatsT mStdDevi
Definition: NanoVDB.h:3155
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:2242
__hostdev__ const Vec3dBBox & worldBBox() const
Definition: NanoVDB.h:5526
__hostdev__ Vec3T applyMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1978
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findFirst() const
Definition: NanoVDB.h:1333
__hostdev__ TileT * tile() const
Definition: NanoVDB.h:2839
__hostdev__ bool isOff(uint32_t n) const
Return true if the given bit is NOT set.
Definition: NanoVDB.h:1201
DataType::template TileIter< DataT > mTileIter
Definition: NanoVDB.h:2832
__hostdev__ Vec3T applyMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1989
__hostdev__ const char * gridName() const
Definition: NanoVDB.h:2046
__hostdev__ ChannelT * setChannel(ChannelT *channelPtr)
Change to an external channel.
Definition: NanoVDB.h:5726
GridBlindDataClass mDataClass
Definition: NanoVDB.h:1556
typename util::match_const< Tile, RootT >::type TileT
Definition: NanoVDB.h:2831
__hostdev__ ChildT * getChild(uint32_t n)
Returns a pointer to the child node at the specifed linear offset.
Definition: NanoVDB.h:3183
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5339
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1508
VDB Tree, which is a thin wrapper around a RootNode.
Definition: NanoVDB.h:2394
__hostdev__ Vec3T applyMapF(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
Definition: NanoVDB.h:1439
decltype(mFlags) Type
Definition: NanoVDB.h:926
__hostdev__ Vec3T indexToWorld(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2174
math::BBox< CoordType > BBoxType
Definition: NanoVDB.h:2819
__hostdev__ Tile * tile(uint32_t n)
Definition: NanoVDB.h:2651
__hostdev__ DenseIter operator++(int)
Definition: NanoVDB.h:2969
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2893
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5340
__hostdev__ GridClass mapToGridClass(GridClass defaultClass=GridClass::Unknown)
Definition: NanoVDB.h:889
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2633
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:2439
__hostdev__ ValueIterator()
Definition: NanoVDB.h:4309
float Type
Definition: NanoVDB.h:521
float FloatType
Definition: NanoVDB.h:3703
__hostdev__ CoordT origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:4388
Highest level of the data structure. Contains a tree and a world->index transform (that currently onl...
Definition: NanoVDB.h:2099
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4825
__hostdev__ ValueOnIter(RootT *parent)
Definition: NanoVDB.h:2922
Vec3dBBox mWorldBBox
Definition: NanoVDB.h:1906
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3296
__hostdev__ const NodeTrait< RootT, 1 >::type * getFirstLower() const
Definition: NanoVDB.h:2533
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1997
FloatType stdDevi
Definition: NanoVDB.h:6218
__hostdev__ char * toStr(char *dst, GridType gridType)
Maps a GridType to a c-string.
Definition: NanoVDB.h:254
__hostdev__ ValueType maximum() const
Return a const reference to the maximum active value encoded in this leaf node.
Definition: NanoVDB.h:4374
__hostdev__ DenseIterator(const InternalNode *parent)
Definition: NanoVDB.h:3395
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4127
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2994
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this internal node.
Definition: NanoVDB.h:3445
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5135
#define NANOVDB_PATCH_VERSION_NUMBER
Definition: NanoVDB.h:148
__hostdev__ void init(std::initializer_list< GridFlags > list={GridFlags::IsBreadthFirst}, uint64_t gridSize=0u, const Map &map=Map(), GridType gridType=GridType::Unknown, GridClass gridClass=GridClass::Unknown)
Definition: NanoVDB.h:1918
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4957
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3777
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:4005
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2703
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &v)
Sets the value at the specified location but leaves its state unchanged.
Definition: NanoVDB.h:4458
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1465
__hostdev__ ValueOnIter()
Definition: NanoVDB.h:2921
Class to access values in channels at a specific voxel location.
Definition: NanoVDB.h:5674
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4140
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3656
Definition: NanoVDB.h:2658
static __hostdev__ uint32_t padding()
Definition: NanoVDB.h:4428
typename GridT::TreeType Type
Definition: NanoVDB.h:2380
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:2859
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5338
char mGridName[MaxNameSize]
Definition: NanoVDB.h:1904
__hostdev__ bool operator>(const Version &rhs) const
Definition: NanoVDB.h:698
static __hostdev__ size_t memUsage(uint32_t bitWidth)
Definition: NanoVDB.h:3882
__hostdev__ void setChild(const CoordType &k, const void *ptr, const RootData *data)
Definition: NanoVDB.h:2619
__hostdev__ Version version() const
Definition: NanoVDB.h:2121
PointAccessor(const NanoGrid< Point > &grid)
Definition: NanoVDB.h:5613
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:2784
const GridBlindMetaData & operator=(const GridBlindMetaData &rhs)
Copy assignment operator that resets mDataOffset and copies mName.
Definition: NanoVDB.h:1599
__hostdev__ ValueType getValue(uint32_t i) const
Definition: NanoVDB.h:3649
__hostdev__ Map(double s, const Vec3d &t=Vec3d(0.0, 0.0, 0.0))
Definition: NanoVDB.h:1399
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:3494
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:3250
__hostdev__ void setLongGridNameOn(bool on=true)
Definition: NanoVDB.h:1967
__hostdev__ Mask(const Mask &other)
Copy constructor.
Definition: NanoVDB.h:1145
static __hostdev__ uint32_t CoordToOffset(const CoordT &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:4486
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4102
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4126
__hostdev__ const BlindDataT * getBlindData(uint32_t n) const
Definition: NanoVDB.h:2292
#define NANOVDB_MAGIC_NUMB
Definition: NanoVDB.h:139
__hostdev__ void setWord(WordT w, uint32_t n)
Definition: NanoVDB.h:1163
GridClass
Classes (superset of OpenVDB) that are currently supported by NanoVDB.
Definition: NanoVDB.h:291
typename DataType::ValueT ValueType
Definition: NanoVDB.h:2814
uint64_t magic
Definition: NanoVDB.h:5819
__hostdev__ bool isPartial() const
return true if the 64 bit checksum is partial, i.e. of head only
Definition: NanoVDB.h:1836
static T scalar(const T &s)
Definition: NanoVDB.h:733
typename RootT::BuildType BuildType
Definition: NanoVDB.h:2409
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4015
Definition: NanoVDB.h:2881
__hostdev__ void * treePtr()
Definition: NanoVDB.h:2000
uint32_t state
Definition: NanoVDB.h:2639
BuildT BuildType
Definition: NanoVDB.h:3622
__hostdev__ void setDev(const FloatType &v)
Definition: NanoVDB.h:3673
__hostdev__ ConstTileIterator cbeginTile() const
Definition: NanoVDB.h:2729
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2406
Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord...
Definition: NanoVDB.h:1755
__hostdev__ bool getDev() const
Definition: NanoVDB.h:4009
__hostdev__ bool isValid(GridType gridType, GridClass gridClass)
return true if the combination of GridType and GridClass is valid.
Definition: NanoVDB.h:608
static __hostdev__ bool isAligned(const void *p)
return true if the specified pointer is 32 byte aligned
Definition: NanoVDB.h:542
__hostdev__ void * getRoot()
Get a non-const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2356
__hostdev__ const CoordBBox & indexBBox() const
Definition: NanoVDB.h:5527
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:3307
uint8_t mFlags
Definition: NanoVDB.h:3629
__hostdev__ TileT * operator->() const
Definition: NanoVDB.h:2688
__hostdev__ LeafNodeType * getFirstLeaf()
Template specializations of getFirstNode.
Definition: NanoVDB.h:2530
uint64_t mOffset
Definition: NanoVDB.h:4161
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3679
__hostdev__ ValueIter(RootT *parent)
Definition: NanoVDB.h:2888
static int64_t PtrDiff(const void *p, const void *q)
Compute the distance, in bytes, between two pointers, dist = p - q.
Definition: Util.h:464
__hostdev__ uint32_t gridIndex() const
Return index of this grid in the buffer.
Definition: NanoVDB.h:2134
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:2433
__hostdev__ bool isEmpty() const
Return true if the root is empty, i.e. has not child nodes or constant tiles.
Definition: NanoVDB.h:2365
Definition: NanoVDB.h:2089
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:2786
LeafNodeType Node0
Definition: NanoVDB.h:2416
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3034
Checksum mChecksum
Definition: NanoVDB.h:1898
Return point to the upper internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6152
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:2229
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5577
typename DataType::StatsT FloatType
Definition: NanoVDB.h:2815
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this root node and any of its child nodes...
Definition: NanoVDB.h:3019
Below is an example of a struct used for random get methods.
Definition: NanoVDB.h:1745
BitFlags()
Definition: NanoVDB.h:927
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:3660
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:2877
__hostdev__ bool isActive(const CoordT &ijk) const
Return true if the voxel value at the given coordinate is active.
Definition: NanoVDB.h:4462
__hostdev__ ConstDenseIterator cbeginDense() const
Definition: NanoVDB.h:2981
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4107
ChildT ChildNodeType
Definition: NanoVDB.h:2808
#define NANOVDB_MAGIC_GRID
Definition: NanoVDB.h:140
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4014
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:3380
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this leaf node.
Definition: NanoVDB.h:4367
void set(const MatT &mat, const MatT &invMat, const Vec3T &translate, double taper=1.0)
Initialize the member data from 3x3 or 4x4 matrices.
Definition: NanoVDB.h:1515
static __hostdev__ KeyT CoordToKey(const CoordType &ijk)
Definition: NanoVDB.h:2579
__hostdev__ void setAvg(float avg)
Definition: NanoVDB.h:3753
MaskT< LOG2DIM > ArrayType
Definition: NanoVDB.h:3939
T Type
Definition: NanoVDB.h:458
__hostdev__ bool isActive() const
Definition: NanoVDB.h:3339
uint64_t mMagic
Definition: NanoVDB.h:1897
__hostdev__ ChannelT * setChannel(uint32_t channelID)
Change to an internal channel, assuming it exists as as blind data in the IndexGrid.
Definition: NanoVDB.h:5732
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4013
__hostdev__ bool isOff() const
Return true if none of the bits are set in this Mask.
Definition: NanoVDB.h:1213
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:5511
__hostdev__ uint32_t valueCount() const
Definition: NanoVDB.h:4098
uint64_t mGridSize
Definition: NanoVDB.h:1903
__hostdev__ NodeT * probeChild(ValueType &value) const
Definition: NanoVDB.h:2957
RootT Node3
Definition: NanoVDB.h:2413
PointType
Definition: NanoVDB.h:396
__hostdev__ void toggle(uint32_t n)
Definition: NanoVDB.h:1296
Trait to map from LEVEL to node type.
Definition: NanoVDB.h:4619
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4056
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:2789
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4057
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:2231
__hostdev__ ValueIter()
Definition: NanoVDB.h:2887
__hostdev__ const char * shortGridName() const
Definition: NanoVDB.h:5524
#define NANOVDB_MINOR_VERSION_NUMBER
Definition: NanoVDB.h:147
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4926
__hostdev__ WordT getWord(uint32_t n) const
Definition: NanoVDB.h:1156
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this leaf node.
Definition: NanoVDB.h:4380
uint64_t KeyT
Return a key based on the coordinates of a voxel.
Definition: NanoVDB.h:2577
Vec3d mVoxelSize
Definition: NanoVDB.h:1907
BuildT ValueType
Definition: NanoVDB.h:3621
uint64_t mFlags
Definition: NanoVDB.h:3148
__hostdev__ const uint32_t & getTableSize() const
Definition: NanoVDB.h:3007
int64_t mDataOffset
Definition: NanoVDB.h:1552
__hostdev__ ValueIterator()
Definition: NanoVDB.h:3317
__hostdev__ Checksum(uint32_t head, uint32_t tail)
Constructor that allows the two 32bit checksums to be initiated explicitly.
Definition: NanoVDB.h:1809
GridBlindMetaData()
Empty constructor.
Definition: NanoVDB.h:1562
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1104
__hostdev__ Mask()
Initialize all bits to zero.
Definition: NanoVDB.h:1132
__hostdev__ bool isCached2(const CoordType &ijk) const
Definition: NanoVDB.h:5117
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3489
Implements Tree::getNodeInfo(math::Coord)
Definition: NanoVDB.h:1759
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4104
__hostdev__ void setStdDeviationOn(bool on=true)
Definition: NanoVDB.h:1969
uint64_t voxelCount
Definition: NanoVDB.h:5845
__hostdev__ uint32_t gridCount() const
Return total number of grids in the buffer.
Definition: NanoVDB.h:2137
__hostdev__ bool isRootConnected() const
return true if RootData follows TreeData in memory without any extra padding
Definition: NanoVDB.h:2084
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:5589
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4846
__hostdev__ bool isValid() const
Methods related to the classification of this grid.
Definition: NanoVDB.h:2227
__hostdev__ void setValue(uint32_t n, const ValueT &v)
Definition: NanoVDB.h:3176
ValueType minimum
Definition: NanoVDB.h:6217
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2990
ChildT UpperNodeType
Definition: NanoVDB.h:2811
uint32_t mGridCount
Definition: NanoVDB.h:1902
CoordT mBBoxMin
Definition: NanoVDB.h:3627
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:5507
__hostdev__ bool isActive() const
Definition: NanoVDB.h:4331
uint64_t FloatType
Definition: NanoVDB.h:776
__hostdev__ const void * nodePtr() const
Return a const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2008
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6072
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:3661
__hostdev__ FloatType stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4383
Definition: NanoVDB.h:6214
__hostdev__ bool isValid() const
return true if the magic number and the version are both valid
Definition: NanoVDB.h:1952
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1703
uint64_t FloatType
Definition: NanoVDB.h:770
__hostdev__ RootT & root()
Definition: NanoVDB.h:2431
float mQuantum
Definition: NanoVDB.h:3711
char * strcpy(char *dst, const char *src)
Copy characters from src to dst.
Definition: Util.h:166
double FloatType
Definition: NanoVDB.h:800
static const int MaxNameSize
Definition: NanoVDB.h:1551
__hostdev__ bool isIndex(GridType gridType)
Return true if the GridType maps to a special index type (not a POD integer type).
Definition: NanoVDB.h:597
Map mMap
Definition: NanoVDB.h:1905
#define NANOVDB_MAGIC_FILE
Definition: NanoVDB.h:141
__hostdev__ ValueType minimum() const
Return a const reference to the minimum active value encoded in this leaf node.
Definition: NanoVDB.h:4371
float type
Definition: NanoVDB.h:515
__hostdev__ const uint32_t & tileCount() const
Return the number of tiles encoded in this root node.
Definition: NanoVDB.h:3006
__hostdev__ Vec3T applyJacobian(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1448
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:5516
Utility functions.
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2813
Bit-compacted representation of all three version numbers.
Definition: NanoVDB.h:673
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4081
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:5505
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:5510
__hostdev__ util::enable_if< BuildTraits< T >::is_index, const uint64_t & >::type valueCount() const
Return the total number of values indexed by this IndexGrid.
Definition: NanoVDB.h:2144
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4840
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, ChannelT *channelPtr)
Ctor from an IndexGrid and an external channel.
Definition: NanoVDB.h:5700
__hostdev__ bool operator>=(const Version &rhs) const
Definition: NanoVDB.h:699
typename DataType::ValueT ValueType
Definition: NanoVDB.h:3245
typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1687
static __hostdev__ uint32_t dim()
Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!) ...
Definition: NanoVDB.h:4409
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5268
typename NanoLeaf< BuildT >::ValueType Type
Definition: NanoVDB.h:6059
Definition: NanoVDB.h:3136
static __hostdev__ uint32_t dim()
Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32) ...
Definition: NanoVDB.h:3439
__hostdev__ bool operator==(const Mask &other) const
Definition: NanoVDB.h:1186
__hostdev__ uint32_t gridIndex() const
Definition: NanoVDB.h:5522
__hostdev__ ChildIter & operator++()
Definition: NanoVDB.h:2860
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2708
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4196
GridBlindMetaData(const GridBlindMetaData &other)
Copy constructor that resets mDataOffset and zeros out mName.
Definition: NanoVDB.h:1585
__hostdev__ TileIterator probe(const CoordT &ijk)
Definition: NanoVDB.h:2731
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4011
__hostdev__ const ChildT * probeChild(ValueType &value) const
Definition: NanoVDB.h:3401
__hostdev__ bool operator==(const Checksum &rhs) const
return true if the checksums are identical
Definition: NanoVDB.h:1856
__hostdev__ const NanoGrid< BuildT > & grid() const
Definition: NanoVDB.h:5563
char * strncpy(char *dst, const char *src, size_t max)
Copies the first num characters of src to dst. If the end of the source C string (which is signaled b...
Definition: Util.h:185
__hostdev__ DenseIterator beginAll() const
Definition: NanoVDB.h:1129
__hostdev__ ConstValueIterator cbeginValueAll() const
Definition: NanoVDB.h:2912
__hostdev__ bool isValid() const
return true if this meta data has a valid combination of semantic, class and value tags ...
Definition: NanoVDB.h:1641
__hostdev__ void disable()
Definition: NanoVDB.h:1845
__hostdev__ const NanoGrid< IndexT > & grid() const
Return a const reference to the IndexGrid.
Definition: NanoVDB.h:5713
static constexpr uint32_t SIZE
Definition: NanoVDB.h:1030
uint32_t mNodeCount[3]
Definition: NanoVDB.h:2345
ValueType mMaximum
Definition: NanoVDB.h:3633
typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1688
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3995
__hostdev__ uint64_t blindDataSize() const
return size in bytes of the blind data represented by this blind meta data
Definition: NanoVDB.h:1669
static __hostdev__ CoordT KeyToCoord(const KeyT &key)
Definition: NanoVDB.h:2587
__hostdev__ const Map & map() const
Return a const reference to the Map for this grid.
Definition: NanoVDB.h:2166
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:4351
__hostdev__ void setRoot(const void *root)
Definition: NanoVDB.h:2350
__hostdev__ BaseIter()
Definition: NanoVDB.h:2833
static __hostdev__ uint32_t wordCount()
Return the number of machine words used by this Mask.
Definition: NanoVDB.h:1040
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:5517
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1103
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3969
typename DataType::BuildT BuildType
Definition: NanoVDB.h:2816
__hostdev__ void setMin(const bool &)
Definition: NanoVDB.h:3963
__hostdev__ bool isValid() const
Definition: NanoVDB.h:5504
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:2785
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4053
__hostdev__ uint32_t tail() const
Definition: NanoVDB.h:1831
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:3955
__hostdev__ bool updateBBox()
Updates the local bounding box of active voxels in this node. Return true if bbox was updated...
Definition: NanoVDB.h:4566
__hostdev__ DenseIter & operator++()
Definition: NanoVDB.h:2964
Return a pointer to the root Tile where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6164
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:2243
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:2235
__hostdev__ uint64_t last(uint32_t i) const
Definition: NanoVDB.h:4178
bool FloatType
Definition: NanoVDB.h:764
__hostdev__ TileT & operator*() const
Definition: NanoVDB.h:2683
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this internal node and an...
Definition: NanoVDB.h:3462
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:1078
Definition: NanoVDB.h:750
#define __hostdev__
Definition: Util.h:73
__hostdev__ const Checksum & checksum() const
Definition: NanoVDB.h:5533
typename DataType::FloatType FloatType
Definition: NanoVDB.h:4227
#define NANOVDB_DATA_ALIGNMENT
Definition: NanoVDB.h:133
typename DataType::Tile Tile
Definition: NanoVDB.h:2821
__hostdev__ bool isValid(const GridBlindDataClass &blindClass, const GridBlindDataSemantic &blindSemantics, const GridType &blindType)
return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid...
Definition: NanoVDB.h:632
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:5520
__hostdev__ DenseIterator operator++(int)
Definition: NanoVDB.h:1111
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4087
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:3225
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:5514
Definition: NanoVDB.h:920
Coord CoordType
Definition: NanoVDB.h:4229
Dummy type for a 16bit quantization of float point values.
Definition: NanoVDB.h:196
uint8_t ArrayType
Definition: NanoVDB.h:3810
typename Mask< Log2Dim >::template Iterator< On > MaskIterT
Definition: NanoVDB.h:3255
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:2240
__hostdev__ TreeT & tree()
Return a non-const reference to the tree.
Definition: NanoVDB.h:2157
CoordT mBBoxMin
Definition: NanoVDB.h:4156
__hostdev__ void setFirstNode(const NodeT *node)
Definition: NanoVDB.h:2362
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1723
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3459
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3785
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3167
__hostdev__ void setMin(float min)
Definition: NanoVDB.h:3747
__hostdev__ void setValue(uint32_t offset, bool)
Definition: NanoVDB.h:4010
__hostdev__ void setMax(const ValueType &v)
Definition: NanoVDB.h:3671
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4018
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3641
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2838
__hostdev__ ChildIter()
Definition: NanoVDB.h:3275
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4193
__hostdev__ BlindDataT * getBlindData(uint32_t n)
Definition: NanoVDB.h:2299
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:3227
__hostdev__ ValueType getLastValue() const
If the last entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getLastValue() on the child.
Definition: NanoVDB.h:3482
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5067
__hostdev__ bool isOn(uint32_t n) const
Return true if the given bit is set.
Definition: NanoVDB.h:1198
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3883
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1488
uint64_t ValueType
Definition: NanoVDB.h:4035
uint16_t ArrayType
Definition: NanoVDB.h:3840
__hostdev__ const MaskType< LOG2DIM > & childMask() const
Return a const reference to the bit mask of child nodes in this internal node.
Definition: NanoVDB.h:3449
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4054
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4055
ValueT value
Definition: NanoVDB.h:2640
__hostdev__ void setDev(float dev)
Definition: NanoVDB.h:3756
Node caching at all (three) tree levels.
Definition: NanoVDB.h:5219
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:2791
__hostdev__ OnIterator beginOn() const
Definition: NanoVDB.h:1125
Definition: NanoVDB.h:1747
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4195
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5342
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4186
bool Type
Definition: NanoVDB.h:6099
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1702
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4139
BuildT BuildType
Definition: NanoVDB.h:5243
Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode) ...
Definition: NanoVDB.h:3617
__hostdev__ const BBoxType & bbox() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:2997
GridBlindDataSemantic
Blind-data Semantics that are currently understood by NanoVDB.
Definition: NanoVDB.h:419
Version mVersion
Definition: NanoVDB.h:1899
__hostdev__ void setAverageOn(bool on=true)
Definition: NanoVDB.h:1968
__hostdev__ bool isSequential() const
return true if nodes at all levels can safely be accessed with simple linear offsets ...
Definition: NanoVDB.h:2256
__hostdev__ Map()
Default constructor for the identity map.
Definition: NanoVDB.h:1388
GridFlags
Grid flags which indicate what extra information is present in the grid buffer.
Definition: NanoVDB.h:328
Metafunction used to determine if the first template parameter is a specialization of the class templ...
Definition: Util.h:451
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3851
__hostdev__ uint32_t & checksum(int i)
Definition: NanoVDB.h:1823
__hostdev__ DenseIterator()
Definition: NanoVDB.h:3390
uint32_t nameSize
Definition: NanoVDB.h:5851
ReadAccessor< ValueT, LEVEL0, LEVEL1, LEVEL2 > createAccessor(const NanoGrid< ValueT > &grid)
Free-standing function for convenient creation of a ReadAccessor with optional and customizable node ...
Definition: NanoVDB.h:5437
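A minimal usage sketch (assuming a valid host-side NanoGrid&lt;float&gt; named grid and the default node-caching template arguments):
    auto acc = nanovdb::createAccessor(grid);           // ReadAccessor that caches visited nodes
    float v  = acc.getValue(nanovdb::Coord(1, 2, 3));   // accelerated random access at voxel (1,2,3)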
RootT RootType
Definition: NanoVDB.h:2403
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3721
Definition: GridHandle.h:27
float type
Definition: NanoVDB.h:508
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Definition: NanoVDB.h:5531
CoordBBox bbox
Definition: NanoVDB.h:6219
float Type
Definition: NanoVDB.h:514
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Return the state and update the value of the specified voxel.
Definition: NanoVDB.h:2448
Visits all tile values and child nodes of this node.
Definition: NanoVDB.h:3384
GridType mGridType
Definition: NanoVDB.h:1909
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4138
__hostdev__ uint64_t gridSize() const
Definition: NanoVDB.h:5521
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:4932
Definition: NanoVDB.h:1061
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:3491
GridType gridType
Definition: NanoVDB.h:5846
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4045
Define static boolean tests for template build types.
Definition: NanoVDB.h:435
__hostdev__ bool isFull() const
return true if the 64 bit checksum is full, i.e. includes both the head and the nodes
Definition: NanoVDB.h:1840
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:2238
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:2878
char * sprint(char *dst, T var1, Types...var2)
prints a variable number of string and/or numbers to a destination string
Definition: Util.h:286
Bit-mask to encode active states and facilitate sequential iterators and a fast codec for I/O compres...
Definition: NanoVDB.h:1027
CoordT CoordType
Definition: NanoVDB.h:4907
__hostdev__ const GridBlindMetaData * blindMetaData(uint32_t n) const
Returns a const reference to the blindMetaData at the specified linear offset.
Definition: NanoVDB.h:2040
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4952
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3209
static ElementType scalar(const T &v)
Definition: NanoVDB.h:744
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:3346
__hostdev__ void setMax(float max)
Definition: NanoVDB.h:3750
__hostdev__ TileIter()
Definition: NanoVDB.h:2666
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3784
__hostdev__ bool getMax() const
Definition: NanoVDB.h:3954
Index64 memUsage(const TreeT &tree, bool threaded=true)
Return the total amount of memory in bytes occupied by this tree.
Definition: Count.h:493
uint64_t mData2
Definition: NanoVDB.h:1914
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:2569
float mMinimum
Definition: NanoVDB.h:3710
__hostdev__ uint64_t offset() const
Definition: NanoVDB.h:4175
static __hostdev__ Coord OffsetToLocalCoord(uint32_t n)
Definition: NanoVDB.h:3514
__hostdev__ const Vec3d & voxelSize() const
Return a vector of the axial voxel sizes.
Definition: NanoVDB.h:5719
__hostdev__ constexpr uint32_t strlen()
return the number of characters (including null termination) required to convert enum type to a strin...
Definition: NanoVDB.h:210
typename NanoLeaf< BuildT >::FloatType FloatType
Definition: NanoVDB.h:6213
Definition: NanoVDB.h:4031
KeyT key
Definition: NanoVDB.h:2637
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2698
uint64_t FloatType
Definition: NanoVDB.h:782
__hostdev__ uint64_t pointCount() const
Definition: NanoVDB.h:4176
typename DataType::StatsT FloatType
Definition: NanoVDB.h:3246
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5129
__hostdev__ uint32_t & tail()
Definition: NanoVDB.h:1832
bool ValueType
Definition: NanoVDB.h:3936
__hostdev__ Tile * probeTile(const CoordT &ijk)
Definition: NanoVDB.h:2747
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4085
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4852
__hostdev__ uint64_t checksum() const
return the 64 bit checksum of this instance
Definition: NanoVDB.h:1821
Dummy type for a voxel whose value equals an offset into an external value array of active values...
Definition: NanoVDB.h:175
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:4266
Top-most node of the VDB tree structure.
Definition: NanoVDB.h:2804
int64_t child
Definition: NanoVDB.h:3139
#define NANOVDB_MAJOR_VERSION_NUMBER
Definition: NanoVDB.h:146
__hostdev__ Vec3T applyJacobianF(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1457
uint8_t ArrayType
Definition: NanoVDB.h:3773
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3645
Struct to derive node type from its level in a given grid, tree or root while preserving constness...
Definition: NanoVDB.h:1680
typename GridT::TreeType type
Definition: NanoVDB.h:2381
__hostdev__ Codec toCodec(const char *str)
Definition: NanoVDB.h:5806
Definition: IndexIterator.h:43
Definition: NanoVDB.h:2845
uint32_t level
Definition: NanoVDB.h:6216
uint16_t padding
Definition: NanoVDB.h:5855
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3821
uint32_t mTileCount[3]
Definition: NanoVDB.h:2346
typename RootT::ChildNodeType Node2
Definition: NanoVDB.h:2414
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3372
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3456
ValueType mMinimum
Definition: NanoVDB.h:3632
__hostdev__ const void * blindData(uint32_t n) const
Returns a const pointer to the blindData at the specified linear offset.
Definition: NanoVDB.h:2284
__hostdev__ GridType toGridType()
Maps from a templated build type to a GridType enum.
Definition: NanoVDB.h:807
size_t strlen(const char *str)
length of a C-string, excluding '\0'.
Definition: Util.h:153
MatType scale(const Vec3< typename MatType::value_type > &s)
Return a matrix that scales by s.
Definition: Mat.h:615
static __hostdev__ uint32_t dim()
Definition: NanoVDB.h:4222
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:5329
uint64_t Type
Definition: NanoVDB.h:465
uint64_t type
Definition: NanoVDB.h:473
const typename NanoRoot< BuildT >::Tile * Type
Definition: NanoVDB.h:6166
__hostdev__ float getDev() const
return the quantized standard deviation of the active values in this node
Definition: NanoVDB.h:3744
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3367
ValueT mMaximum
Definition: NanoVDB.h:3153
__hostdev__ uint64_t idx(int i, int j, int k) const
Definition: NanoVDB.h:5739
static __hostdev__ CoordT OffsetToLocalCoord(uint32_t n)
Compute the local coordinates from a linear offset.
Definition: NanoVDB.h:4393
__hostdev__ const math::BBox< CoordType > & bbox() const
Return a const reference to the bounding box in index space of active values in this internal node an...
Definition: NanoVDB.h:3471
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3488
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this root node...
Definition: NanoVDB.h:3022
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4179
__hostdev__ bool operator<=(const Version &rhs) const
Definition: NanoVDB.h:697
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:3952
T ElementType
Definition: NanoVDB.h:732
bool Type
Definition: NanoVDB.h:6177
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2107
float Type
Definition: NanoVDB.h:528
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4965
__hostdev__ auto pos() const
Definition: NanoVDB.h:2677
uint64_t Type
Definition: NanoVDB.h:472
__hostdev__ uint32_t getMinor() const
Definition: NanoVDB.h:702
Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode) ...
Definition: NanoVDB.h:2567
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3301
Data encoded at the head of each segment of a file or stream.
Definition: NanoVDB.h:5818
__hostdev__ ValueIterator operator++(int)
Definition: NanoVDB.h:4342
__hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const
Return the index of the first blind data with specified semantic if found, otherwise -1...
Definition: NanoVDB.h:2312
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3716
__hostdev__ ValueOffIterator(const LeafNode *parent)
Definition: NanoVDB.h:4281
openvdb::GridBase Grid
Definition: Utils.h:43
__hostdev__ Mask(bool on)
Definition: NanoVDB.h:1137
__hostdev__ void setOff(uint32_t n)
Set the specified bit off.
Definition: NanoVDB.h:1224
__hostdev__ const char * gridName() const
Return a c-string with the name of this grid.
Definition: NanoVDB.h:2259
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4287
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:5508
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2405
double FloatType
Definition: NanoVDB.h:758
Version version
Definition: NanoVDB.h:5856
__hostdev__ int blindDataCount() const
Definition: NanoVDB.h:5529
uint64_t type
Definition: NanoVDB.h:480
__hostdev__ ChildT * getChild(const Tile *tile)
Returns a pointer to the child node in the specified tile.
Definition: NanoVDB.h:2772
__hostdev__ const Checksum & checksum() const
Return checksum of the grid buffer.
Definition: NanoVDB.h:2265
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5055
GridClass mGridClass
Definition: NanoVDB.h:1908
__hostdev__ Version(uint32_t data)
Constructor from a raw uint32_t data representation.
Definition: NanoVDB.h:686
Dummy type for a voxel whose value equals an offset into an external value array. ...
Definition: NanoVDB.h:172
Maps one type (e.g. the build types above) to other (actual) types.
Definition: NanoVDB.h:456
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2426
__hostdev__ uint32_t nodeCount(uint32_t level) const
Definition: NanoVDB.h:5532
__hostdev__ ValueType getLastValue() const
Return the last value in this leaf node.
Definition: NanoVDB.h:4448
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3759
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4961
__hostdev__ bool isHalf() const
Definition: NanoVDB.h:1837
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4086
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3950
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:2599
__hostdev__ ValueIterator(const LeafNode *parent)
Definition: NanoVDB.h:4314
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4813
typename RootT::ValueType ValueType
Definition: NanoVDB.h:4807
__hostdev__ DataT * data() const
Definition: NanoVDB.h:2693
__hostdev__ uint32_t id() const
Definition: NanoVDB.h:700
__hostdev__ size_t memUsage() const
Definition: NanoVDB.h:3881
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:4188
__hostdev__ const NodeT * getFirstNode() const
return a const pointer to the first node of the specified type
Definition: NanoVDB.h:2505
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:2818
__hostdev__ uint32_t getPatch() const
Definition: NanoVDB.h:703
Definition: NanoVDB.h:2915
__hostdev__ DenseIter(RootT *parent)
Definition: NanoVDB.h:2956
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this internal ...
Definition: NanoVDB.h:3468
__hostdev__ void setOn(uint32_t n)
Set the specified bit on.
Definition: NanoVDB.h:1222
__hostdev__ const uint64_t & firstOffset() const
Definition: NanoVDB.h:4052
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4259
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1497
__hostdev__ bool isCompatible() const
Definition: NanoVDB.h:704
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4956
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5125
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3820
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5132
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:2451
const typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1695
__hostdev__ int age() const
Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER.
Definition: NanoVDB.h:708
__hostdev__ bool isRootNext() const
return true if RootData is laid out immediately after TreeData in memory
Definition: NanoVDB.h:2371
__hostdev__ NodeT * getFirstNode()
return a pointer to the first node of the specified type
Definition: NanoVDB.h:2495
__hostdev__ const NodeTrait< RootT, 2 >::type * getFirstUpper() const
Definition: NanoVDB.h:2535
CheckMode
List of different modes for computing for a checksum.
Definition: NanoVDB.h:1764
__hostdev__ void setAvg(const FloatType &v)
Definition: NanoVDB.h:3672
__hostdev__ uint8_t bitWidth() const
Definition: NanoVDB.h:3880
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this root node and any of...
Definition: NanoVDB.h:3016
__hostdev__ ValueIter operator++(int)
Definition: NanoVDB.h:2900
bool isValid() const
Definition: NanoVDB.h:5823
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3161
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4321
uint16_t gridCount
Definition: NanoVDB.h:5821
__hostdev__ void extrema(ValueType &min, ValueType &max) const
Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2555
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1076
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3731
__hostdev__ T & getValue(const math::Coord &ijk, T *channelPtr) const
Return the value from a specified channel that maps to the specified coordinate.
Definition: NanoVDB.h:5758
typename Node2::ChildNodeType Node1
Definition: NanoVDB.h:2415
Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
Definition: NanoVDB.h:187
static __hostdev__ uint64_t memUsage()
return memory usage in bytes for the class
Definition: NanoVDB.h:2429
RootT RootNodeType
Definition: NanoVDB.h:2404
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1991
__hostdev__ uint64_t first(uint32_t i) const
Definition: NanoVDB.h:4177
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4128
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:5519
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:2234
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5061
__hostdev__ NodeT * child() const
Definition: NanoVDB.h:2713
uint32_t countOn(uint64_t v)
Definition: Util.h:622
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, uint32_t channelID=0u)
Ctor from an IndexGrid and an integer ID of an internal channel that is assumed to exist as blind dat...
Definition: NanoVDB.h:5689
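For illustration only, a sketch assuming an index grid idxGrid of type NanoGrid&lt;nanovdb::ValueIndex&gt; whose blind data block 0 stores a float channel:
    nanovdb::ChannelAccessor&lt;float&gt; acc(idxGrid, 0u);   // bind channel 0, stored as blind data
    float v = acc.getValue(nanovdb::Coord(0, 0, 0));    // channel value mapped to this voxel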
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5567
void ArrayType
Definition: NanoVDB.h:4037
__hostdev__ ChannelT & operator()(const math::Coord &ijk) const
Definition: NanoVDB.h:5743
__hostdev__ uint32_t countOn(uint32_t i) const
Return the number of lower set bits in the mask up to but excluding the i'th bit.
Definition: NanoVDB.h:1052
__hostdev__ ChildIter()
Definition: NanoVDB.h:2853
const std::enable_if<!VecTraits< T >::IsVec, T >::type & min(const T &a, const T &b)
Definition: Composite.h:106
__hostdev__ bool hasStats() const
Definition: NanoVDB.h:4050
Definition: NanoVDB.h:1549
__hostdev__ uint64_t memUsage() const
Return the actual memory footprint of this root node.
Definition: NanoVDB.h:3028
int64_t child
Definition: NanoVDB.h:2638
__hostdev__ void fill(const ValueType &v)
Definition: NanoVDB.h:3681
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:4008
BuildT ArrayType
Definition: NanoVDB.h:3624
uint32_t mBlindMetadataCount
Definition: NanoVDB.h:1911
Type Pow2(Type x)
Return x^2.
Definition: Math.h:548
__hostdev__ OffIterator beginOff() const
Definition: NanoVDB.h:1127
__hostdev__ DenseIterator beginDense()
Definition: NanoVDB.h:2980
BuildT BuildType
Definition: NanoVDB.h:4905
Version version
Definition: NanoVDB.h:5820
__hostdev__ bool getMin() const
Definition: NanoVDB.h:3953
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4364
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4850
__hostdev__ bool getMax() const
Definition: NanoVDB.h:4007
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2926
bool BuildType
Definition: NanoVDB.h:3937
math::Extrema extrema(const IterT &iter, bool threaded=true)
Iterate over a scalar grid and compute extrema (min/max) of the values of the voxels that are visited...
Definition: Statistics.h:354
__hostdev__ CoordT origin() const
Definition: NanoVDB.h:2636
__hostdev__ bool operator<(const Version &rhs) const
Definition: NanoVDB.h:696
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5133
__hostdev__ const Vec3dBBox & worldBBox() const
return AABB of active values in world space
Definition: NanoVDB.h:2066
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3962
__hostdev__ uint8_t flags() const
Definition: NanoVDB.h:4385
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:3212
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2106
__hostdev__ bool isEmpty() const
Return true if this RootNode is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:3031
VecT< GridHandleT > readUncompressedGrids(const char *fileName, const typename GridHandleT::BufferType &buffer=typename GridHandleT::BufferType())
Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
Definition: NanoVDB.h:5988
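A host-side sketch, assuming nanovdb/GridHandle.h and nanovdb/HostBuffer.h are also included, the default std::vector container is used, and "grids.nvdb" is a placeholder file name:
    auto handles = nanovdb::io::readUncompressedGrids&lt;nanovdb::GridHandle&lt;nanovdb::HostBuffer&gt;&gt;("grids.nvdb");
    for (auto& h : handles) {                            // one GridHandle per grid in the file
        if (const auto* grid = h.grid&lt;float&gt;()) { /* use the float grid */ }
    }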
uint64_t Type
Definition: NanoVDB.h:535
CoordT mBBoxMin
Definition: NanoVDB.h:4040
__hostdev__ uint32_t checksum(int i) const
Definition: NanoVDB.h:1825
__hostdev__ bool operator!=(const Mask &other) const
Definition: NanoVDB.h:1195
CoordT CoordType
Definition: NanoVDB.h:5037
Dummy type for a variable bit quantization of floating point values.
Definition: NanoVDB.h:199
__hostdev__ Vec3T indexToWorldF(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2197
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:5509
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:5518
__hostdev__ const MaskType< LOG2DIM > & getChildMask() const
Definition: NanoVDB.h:3450
StatsT mAverage
Definition: NanoVDB.h:2605
Visits all values in a leaf node, i.e. both active and inactive values.
Definition: NanoVDB.h:4303
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:3224
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:2241
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3338
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2635
Visits active tile values of this node only.
Definition: NanoVDB.h:3350
__hostdev__ const NodeTrait< RootT, LEVEL >::type * getFirstNode() const
return a const pointer to the first node of the specified level
Definition: NanoVDB.h:2524
#define NANOVDB_HOSTDEV_DISABLE_WARNING
Definition: Util.h:94
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3650
Visits all inactive values in a leaf node.
Definition: NanoVDB.h:4270
__hostdev__ const TreeType & tree() const
Return a const reference to the tree of the IndexGrid.
Definition: NanoVDB.h:5716
Like ValueIndex but with a mutable mask.
Definition: NanoVDB.h:178
typename RootT::ValueType ValueType
Definition: NanoVDB.h:2408
static __hostdev__ uint64_t memUsage(uint32_t tableSize)
Return the expected memory footprint in bytes with the specified number of tiles. ...
Definition: NanoVDB.h:3025
Definition: NanoVDB.h:1749
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:4350
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3999
GridMetaData(const NanoGrid< T > &grid)
Definition: NanoVDB.h:5469
float FloatType
Definition: NanoVDB.h:752
__hostdev__ CheckMode mode() const
return the mode of the 64 bit checksum
Definition: NanoVDB.h:1848
__hostdev__ bool isMask() const
Definition: NanoVDB.h:2236
__hostdev__ Vec3T applyJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1993
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4959
__hostdev__ void setMax(const bool &)
Definition: NanoVDB.h:3964
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2105
OutGridT const XformOp bool bool
Definition: ValueTransformer.h:609
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:2570
typename BuildT::ValueType ValueType
Definition: NanoVDB.h:2109
__hostdev__ uint32_t nodeCount() const
Return number of nodes at LEVEL.
Definition: NanoVDB.h:2031
float ValueType
Definition: NanoVDB.h:3702
__hostdev__ Mask & operator&=(const Mask &other)
Bitwise intersection.
Definition: NanoVDB.h:1299
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5138
__hostdev__ const TreeT & tree() const
Return a const reference to the tree.
Definition: NanoVDB.h:2154
__hostdev__ bool safeCast() const
return true if the RootData follows right after the TreeData. If so, this implies that it's safe to c...
Definition: NanoVDB.h:5492
uint32_t findLowestOn(uint32_t v)
Returns the index of the lowest, i.e. least significant, on bit in the specified 32 bit word...
Definition: Util.h:502
__hostdev__ ChannelT & getValue(const math::Coord &ijk) const
Return the value from a cached channel that maps to the specified coordinate.
Definition: NanoVDB.h:5742
uint64_t mData1
Definition: NanoVDB.h:1913
BitFlags(Type mask)
Definition: NanoVDB.h:928
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:3412
bool streq(const char *lhs, const char *rhs)
Test if two null-terminated byte strings are the same.
Definition: Util.h:268
__hostdev__ Vec3T worldToIndexDirF(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2207
__hostdev__ BaseIter(DataT *data)
Definition: NanoVDB.h:2834
__hostdev__ Iterator()
Definition: NanoVDB.h:1064
typename ChildT::template MaskType< LOG2DIM > MaskT
Definition: NanoVDB.h:3133
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4106
BitFlags< 32 > mFlags
Definition: NanoVDB.h:1900
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1995
__hostdev__ ValueOnIter operator++(int)
Definition: NanoVDB.h:2933
uint8_t mFlags
Definition: NanoVDB.h:4158
__hostdev__ void setAvg(const bool &)
Definition: NanoVDB.h:3965
__hostdev__ void setMin(const ValueType &v)
Definition: NanoVDB.h:3670
__hostdev__ bool getMin() const
Definition: NanoVDB.h:4006
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:2232
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:3308
void writeUncompressedGrid(StreamT &os, const GridData *gridData, bool raw=false)
This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h. Unlike the latter, this function has no dependencies at all, not even NanoVDB.h, so it also works if client code only includes PNanoVDB.h!
Definition: NanoVDB.h:5884
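A complementary write sketch, assuming the same io namespace, a const NanoGrid&lt;float&gt;&amp; named grid, and that a std::ofstream can serve as the output stream ("out.nvdb" is a placeholder file name):
    std::ofstream os("out.nvdb", std::ios::binary);      // binary output stream
    nanovdb::io::writeUncompressedGrid(os, grid.data()); // grid.data() exposes the underlying GridData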
__hostdev__ bool isEmpty() const
test if the grid is empty, i.e. the root table has size 0
Definition: NanoVDB.h:2080
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5634
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:3291
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3708
#define __device__
Definition: Util.h:79
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1980
int64_t mBlindMetadataOffset
Definition: NanoVDB.h:1910
float mTaperF
Definition: NanoVDB.h:1381
Implements Tree::probeLeaf(math::Coord)
Definition: NanoVDB.h:1757
__hostdev__ ChildIter(RootT *parent)
Definition: NanoVDB.h:2854
__hostdev__ void setValue(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3651
__hostdev__ CoordBBox bbox() const
Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2368
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4043
typename RootT::CoordType CoordType
Definition: NanoVDB.h:4808
__hostdev__ MagicType toMagic(uint64_t magic)
maps 64 bits of magic number to enum
Definition: NanoVDB.h:367
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3436
GridClass gridClass
Definition: NanoVDB.h:5847
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:2230
Codec codec
Definition: NanoVDB.h:5822
__hostdev__ ChildT * probeChild(const CoordT &ijk)
Definition: NanoVDB.h:2758
RootType RootNodeType
Definition: NanoVDB.h:2104
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5300
static __hostdev__ uint64_t memUsage()
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:2063
uint64_t gridSize
Definition: NanoVDB.h:5845
__hostdev__ void setValueOnly(const CoordT &ijk, const ValueType &v)
Definition: NanoVDB.h:4459
__hostdev__ const NodeT * getNode() const
Return a const pointer to the cached node of the specified type.
Definition: NanoVDB.h:5284
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3845
__hostdev__ bool isFloatingPoint(GridType gridType)
return true if the GridType maps to a floating point type
Definition: NanoVDB.h:558
CoordT mBBoxMin
Definition: NanoVDB.h:3942
__hostdev__ void setOff()
Set all bits off.
Definition: NanoVDB.h:1279
__hostdev__ void localToGlobalCoord(Coord &ijk) const
modifies local coordinates to global coordinates of a tile or child node
Definition: NanoVDB.h:3522
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:2790
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3333
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3492
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:3038
__hostdev__ Vec3T indexToWorldGrad(const Vec3T &grad) const
transform the gradient from index space to world space.
Definition: NanoVDB.h:2189
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this grid.
Definition: NanoVDB.h:2224
__hostdev__ const LeafNode * probeLeaf(const CoordT &) const
Definition: NanoVDB.h:4483
__hostdev__ uint64_t * words()
Return a pointer to the list of words of the bit mask.
Definition: NanoVDB.h:1152
__hostdev__ void init(float min, float max, uint8_t bitWidth)
Definition: NanoVDB.h:3725
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4849
__hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
Constructor from major.minor.patch version numbers.
Definition: NanoVDB.h:688
__hostdev__ Mask & operator|=(const Mask &other)
Bitwise union.
Definition: NanoVDB.h:1307
static __hostdev__ uint32_t voxelCount()
Return the total number of voxels (e.g. values) encoded in this leaf node.
Definition: NanoVDB.h:4426
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:3130
__hostdev__ DataType * data()
Definition: NanoVDB.h:4362
__hostdev__ Mask & operator-=(const Mask &other)
Bitwise difference.
Definition: NanoVDB.h:1315
__hostdev__ Checksum(uint64_t checksum, CheckMode mode=CheckMode::Full)
Definition: NanoVDB.h:1814
static __hostdev__ bool safeCast(const NanoGrid< T > &grid)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5503
__hostdev__ ValueIterator beginValue()
Definition: NanoVDB.h:2911
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5273
uint64_t mPointCount
Definition: NanoVDB.h:4162
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4847
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4049
__hostdev__ CoordType origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:3453
__hostdev__ DenseIterator(uint32_t pos=Mask::SIZE)
Definition: NanoVDB.h:1098
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4292
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4137
PointAccessor(const NanoGrid< BuildT > &grid)
Definition: NanoVDB.h:5550
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arit...
Definition: NanoVDB.h:1506
__hostdev__ void setValueOnly(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4180
__hostdev__ uint64_t getIndex(const math::Coord &ijk) const
Return the linear offset into a channel that maps to the specified coordinate.
Definition: NanoVDB.h:5738
ValueT mMinimum
Definition: NanoVDB.h:3152
__hostdev__ bool setGridName(const char *src)
Definition: NanoVDB.h:1970
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4129
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3945
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts (in place) a local index coordinate to a global index coordinate.
Definition: NanoVDB.h:4401
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5250
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4199
uint64_t ValueType
Definition: NanoVDB.h:4150
Dummy type for a 8bit quantization of float point values.
Definition: NanoVDB.h:193
__hostdev__ DataType * data()
Definition: NanoVDB.h:2424
typename NanoLeaf< BuildT >::ValueType ValueType
Definition: NanoVDB.h:6212
MagicType
Enums used to identify magic numbers recognized by NanoVDB.
Definition: NanoVDB.h:358
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5383
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2963
Dummy type for a voxel whose value equals its binary active state.
Definition: NanoVDB.h:184
uint8_t mFlags
Definition: NanoVDB.h:4042
uint64_t mPrefixSum
Definition: NanoVDB.h:4044
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:2841
__hostdev__ Vec3T applyJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1982
__hostdev__ util::enable_if< util::is_same< T, Point >::value, const uint64_t & >::type pointCount() const
Return the total number of points indexed by this PointGrid.
Definition: NanoVDB.h:2151
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5078
typename util::match_const< ChildT, DataT >::type NodeT
Definition: NanoVDB.h:2662
__hostdev__ ChildIter(ParentT *parent)
Definition: NanoVDB.h:3280
uint32_t mGridIndex
Definition: NanoVDB.h:1901
__hostdev__ ValueOnIterator(const LeafNode *parent)
Definition: NanoVDB.h:4248
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5131
uint64_t mVoxelCount
Definition: NanoVDB.h:2347
static __hostdev__ uint32_t CoordToOffset(const CoordType &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:3506
Vec3d voxelSize
Definition: NanoVDB.h:5850
__hostdev__ uint32_t nodeCount() const
Definition: NanoVDB.h:2474
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5130
uint64_t type
Definition: NanoVDB.h:466
GridBlindDataSemantic mSemantic
Definition: NanoVDB.h:1555
__hostdev__ Vec3T applyMap(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1431
CoordT CoordType
Definition: NanoVDB.h:5245
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3377
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3647
__hostdev__ const CoordBBox & indexBBox() const
return AABB of active values in index space
Definition: NanoVDB.h:2069
__hostdev__ bool isFloatingPointVector(GridType gridType)
return true if the GridType maps to a floating point vec3.
Definition: NanoVDB.h:572
ValueT mBackground
Definition: NanoVDB.h:2602
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1724
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:3356
__hostdev__ bool isInteger(GridType gridType)
Return true if the GridType maps to a POD integer type.
Definition: NanoVDB.h:584
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2435
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5644
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findPrev(uint32_t start) const
Definition: NanoVDB.h:1357
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3036
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2892
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4938
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1710
Codec codec
Definition: NanoVDB.h:5854
Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation...
Definition: NanoVDB.h:1376
uint64_t FloatType
Definition: NanoVDB.h:4036
float Type
Definition: NanoVDB.h:507
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache. Noop since this template specializa...
Definition: NanoVDB.h:4832
__hostdev__ const Tile * probeTile(const CoordT &ijk) const
Definition: NanoVDB.h:2753
#define NANOVDB_ASSERT(x)
Definition: Util.h:50
char mName[MaxNameSize]
Definition: NanoVDB.h:1558
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1717
GridType
List of types that are currently supported by NanoVDB.
Definition: NanoVDB.h:220
Vec3dBBox worldBBox
Definition: NanoVDB.h:5848
uint32_t mValueSize
Definition: NanoVDB.h:1554
typename BuildT::CoordType CoordType
Definition: NanoVDB.h:2111
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:4267
GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
Definition: NanoVDB.h:1573
__hostdev__ DenseIterator & operator++()
Definition: NanoVDB.h:1106
__hostdev__ void setOn()
Set all bits on.
Definition: NanoVDB.h:1273
__hostdev__ FloatType average() const
Return a const reference to the average of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4377
Class to access points at a specific voxel location.
Definition: NanoVDB.h:5543
__hostdev__ Mask & operator^=(const Mask &other)
Bitwise XOR.
Definition: NanoVDB.h:1323
static __hostdev__ bool safeCast(const GridData *gridData)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5496
ValueT mMaximum
Definition: NanoVDB.h:2604
static __hostdev__ uint64_t alignmentPadding(const void *p)
return the smallest number of bytes that when added to the specified pointer results in a 32 byte ali...
Definition: NanoVDB.h:545
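For example, a sketch of how client code might check a raw buffer before interpreting it as a grid (buf is a placeholder pointer):
    if (nanovdb::alignmentPadding(buf) != 0) {           // buffer start is not 32 byte aligned
        /* reallocate, or advance the pointer with nanovdb::alignPtr(buf) */
    }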
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4912
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:1083
ValueT mMinimum
Definition: NanoVDB.h:2603
__hostdev__ ChildIter operator++(int)
Definition: NanoVDB.h:2866
bool FloatType
Definition: NanoVDB.h:794
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Return the total number of active tiles at the specified level of the tree.
Definition: NanoVDB.h:2467
C++11 implementation of std::is_floating_point.
Definition: Util.h:329
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:4191
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3328
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4834
static void * memzero(void *dst, size_t byteCount)
Zero initialization of memory.
Definition: Util.h:297
uint64_t mValueCount
Definition: NanoVDB.h:1553
__hostdev__ DataType * data()
Definition: NanoVDB.h:3434
const typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1739
__hostdev__ const BlindDataT * getBlindData() const
Get a const pointer to the blind data represented by this meta data.
Definition: NanoVDB.h:1634
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:3226
__hostdev__ ValueT getValue(uint32_t n) const
Definition: NanoVDB.h:3194
static DstT * PtrAdd(void *p, int64_t offset)
Adds a byte offset to a non-const pointer to produce another non-const pointer.
Definition: Util.h:478
__hostdev__ void setValue(const CoordT &ijk, const ValueType &v)
Sets the value at the specified location and activates its state.
Definition: NanoVDB.h:4453
__hostdev__ ValueOnIter & operator++()
Definition: NanoVDB.h:2927
__hostdev__ float getAvg() const
return the quantized average of the active values in this node
Definition: NanoVDB.h:3740
Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data and one f...
Definition: NanoVDB.h:1790
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5344
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this tree.
Definition: NanoVDB.h:2460
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4845
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:5506
__hostdev__ float getMax() const
return the quantized maximum of the active values in this node
Definition: NanoVDB.h:3737
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:3129
__hostdev__ bool getDev() const
Definition: NanoVDB.h:3956
Implements Tree::getDim(math::Coord)
Definition: NanoVDB.h:1753
Definition: NanoVDB.h:2827
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:3003
__hostdev__ DenseIterator beginDense() const
Definition: NanoVDB.h:3425
Codec
Define compression codecs.
Definition: NanoVDB.h:5790
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Definition: NanoVDB.h:1986
__hostdev__ uint32_t countOn() const
Return the total number of set bits in this Mask.
Definition: NanoVDB.h:1043
uint8_t mFlags
Definition: NanoVDB.h:3707
__hostdev__ bool isChild(uint32_t n) const
Definition: NanoVDB.h:3206
Internal nodes of a VDB tree.
Definition: NanoVDB.h:3241
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:4243
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:4189
__hostdev__ ConstDenseIterator cbeginChildAll() const
Definition: NanoVDB.h:2982
static __hostdev__ T * alignPtr(T *p)
offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers...
Definition: NanoVDB.h:553
__hostdev__ bool isOn() const
Return true if all the bits are set in this Mask.
Definition: NanoVDB.h:1204
__hostdev__ ConstTileIterator probe(const CoordT &ijk) const
Definition: NanoVDB.h:2739
ValueT ValueType
Definition: NanoVDB.h:5244
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4844
BuildT TreeType
Definition: NanoVDB.h:2102
Base-class for quantized float leaf nodes.
Definition: NanoVDB.h:3698
uint64_t FloatType
Definition: NanoVDB.h:788
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:3147
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3039
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:2788
__hostdev__ Vec3T worldToIndexF(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2193
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3949
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1709
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4103
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3417
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:4945
__hostdev__ Vec3T worldToIndex(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2170
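A sketch of a world-to-index lookup, assuming the NanoGrid&lt;float&gt; grid and ReadAccessor acc from the accessor sketch above, and a world-space position xyzWorld of type nanovdb::Vec3d:
    nanovdb::Vec3d ijk = grid.worldToIndex(xyzWorld);    // continuous index-space coordinates
    float v = acc.getValue(nanovdb::Coord::Floor(ijk));  // value at the containing (rounded-down) voxel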
Definition: NanoVDB.h:2342
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4173
C++11 implementation of std::is_same.
Definition: Util.h:314
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5262
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3714
__hostdev__ void setValue(uint32_t offset, bool v)
Definition: NanoVDB.h:3957
__hostdev__ bool isActive() const
Return true if this node or any of its child nodes contain active values.
Definition: NanoVDB.h:3536
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel (regardless of state or location in the tree.) ...
Definition: NanoVDB.h:2438
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4083
Struct with all the member data of the InternalNode (useful during serialization of an openvdb Intern...
Definition: NanoVDB.h:3127
static __hostdev__ constexpr int64_t memUsage()
Definition: NanoVDB.h:3813
__hostdev__ const NanoGrid< Point > & grid() const
Definition: NanoVDB.h:5630
TileT * mPos
Definition: NanoVDB.h:2663
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3844
const typename GridT::TreeType Type
Definition: NanoVDB.h:2386
Dummy type for a 4-bit quantization of floating point values.
Definition: NanoVDB.h:190
__hostdev__ bool operator!=(const Checksum &rhs) const
return true if the checksums are not identical
Definition: NanoVDB.h:1860
uint32_t Type
Definition: NanoVDB.h:6113
__hostdev__ uint64_t gridSize() const
Return the memory footprint of the entire grid in bytes, i.e. including all nodes and blind data.
Definition: NanoVDB.h:2131
__hostdev__ Version version() const
Definition: NanoVDB.h:5536
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:3132
uint64_t mCRC64
Definition: NanoVDB.h:1796
__hostdev__ uint64_t & full()
Definition: NanoVDB.h:1828
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4012
Return a pointer to the lower internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6139
uint64_t type
Definition: NanoVDB.h:487
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3951
const typename GridT::TreeType type
Definition: NanoVDB.h:2387
__hostdev__ NodeTrait< RootT, 1 >::type * getFirstLower()
Definition: NanoVDB.h:2532
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4958
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this internal node and any of its child nodes.
Definition: NanoVDB.h:3465
__hostdev__ void setBlindData(const void *blindData)
Definition: NanoVDB.h:1611
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:3211
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Get iterators over the attributes of the points stored in a specific voxel.
Definition: NanoVDB.h:5655
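voxelPoints is the entry point for reading per-point attributes out of a point grid. A minimal sketch, assuming a PointAccessor specialized for NanoGrid<Point> and a blind-data channel of nanovdb::Vec3f positions; the attribute type and the exact template parameters are assumptions, not a prescription:

    #include <nanovdb/NanoVDB.h>

    // Count the points stored in one voxel of a point grid (sketch).
    uint64_t countPointsInVoxel(const nanovdb::NanoGrid<nanovdb::Point>& grid, const nanovdb::Coord& ijk)
    {
        nanovdb::PointAccessor<nanovdb::Vec3f, nanovdb::Point> acc(grid); // accessor over Vec3f attributes
        const nanovdb::Vec3f *begin = nullptr, *end = nullptr;
        return acc.voxelPoints(ijk, begin, end); // returns the point count and sets [begin, end)
    }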
uint8_t mFlags
Definition: NanoVDB.h:3944
T type
Definition: Util.h:387
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4105
__hostdev__ void setBBoxOn(bool on=true)
Definition: NanoVDB.h:1966
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:2237
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:2611
__hostdev__ uint32_t head() const
Definition: NanoVDB.h:1829
T type
Definition: NanoVDB.h:459
__hostdev__ ValueIterator & operator++()
Definition: NanoVDB.h:4337
__hostdev__ bool setName(const char *name)
Sets the name string.
Definition: NanoVDB.h:1619
__hostdev__ uint32_t blindDataCount() const
Return the number of blind-data blocks encoded in this grid.
Definition: NanoVDB.h:2271
__hostdev__ void setChild(uint32_t n, const void *ptr)
Definition: NanoVDB.h:3169
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1984
__hostdev__ bool operator==(const Version &rhs) const
Definition: NanoVDB.h:695
Struct with all the member data of the Grid (useful during serialization of an openvdb grid) ...
Definition: NanoVDB.h:1894
typename ChildT::template MaskType< LOG2 > MaskType
Definition: NanoVDB.h:3253
auto callNanoGrid(GridDataT *gridData, ArgsT &&...args)
Below is an example of the struct used for generic programming with callNanoGrid. ...
Definition: NanoVDB.h:4720
Implements Tree::isActive(math::Coord)
Definition: NanoVDB.h:1751
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 32-bit floating point arithmetic.
Definition: NanoVDB.h:1476
Definition: NanoVDB.h:723
__hostdev__ bool probeValue(const CoordT &ijk, ValueType &v) const
Return true if the voxel value at the given coordinate is active, and update v with the value.
Definition: NanoVDB.h:4476
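probeValue combines a value lookup with an active-state test in one call. A minimal sketch, assuming a float grid and a read accessor obtained from it; the helper name activeValueOr is made up:

    #include <nanovdb/NanoVDB.h>

    // Return the voxel value at ijk, or a fallback when the voxel is inactive (sketch).
    float activeValueOr(const nanovdb::FloatGrid& grid, const nanovdb::Coord& ijk, float fallback)
    {
        auto acc = grid.getAccessor();                // lightweight read accessor
        float v;
        return acc.probeValue(ijk, v) ? v : fallback; // true only for active voxels
    }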
__hostdev__ NodeTrait< RootT, 2 >::type * getFirstUpper()
Definition: NanoVDB.h:2534
__hostdev__ void toggle()
Toggle the state of all bits in the mask.
Definition: NanoVDB.h:1291
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:2233
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4194
uint32_t mData0
Definition: NanoVDB.h:1912
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4254
Dummy type for indexing points into voxels.
Definition: NanoVDB.h:202
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:3446
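The per-node value mask can be combined with Mask::countOn (documented above) to count active entries without visiting individual voxels. A minimal sketch, assuming a lower internal node of a float grid, spelled via the NanoLower alias:

    #include <nanovdb/NanoVDB.h>

    // Number of active tiles/values registered directly in this internal node (sketch).
    uint32_t activeEntries(const nanovdb::NanoLower<float>& node)
    {
        return node.getValueMask().countOn(); // set bits in the node's value mask
    }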
__hostdev__ const void * blindData() const
Returns a const void pointer to the blind data.
Definition: NanoVDB.h:1623
__hostdev__ ValueType getValue(const CoordT &ijk) const
Return the voxel value at the given coordinate.
Definition: NanoVDB.h:4443
static __hostdev__ size_t memUsage()
Return memory usage in bytes for the class.
Definition: NanoVDB.h:3442
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:3286
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:3131
Definition: NanoVDB.h:897
__hostdev__ bool isActive(const CoordType &ijk) const
Return the active state of the given voxel (regardless of state or location in the tree).
Definition: NanoVDB.h:2442
__hostdev__ const ChildT * getChild(uint32_t n) const
Definition: NanoVDB.h:3188
uint32_t findHighestOn(uint32_t v)
Returns the index of the highest, i.e. most significant, on bit in the specified 32 bit word...
Definition: Util.h:572
__hostdev__ uint64_t activeVoxelCount() const
Definition: NanoVDB.h:5530
bool Type
Definition: NanoVDB.h:493
__hostdev__ const ChildT * probeChild(const CoordT &ijk) const
Definition: NanoVDB.h:2764
Definition: NanoVDB.h:1095
__hostdev__ ValueType getFirstValue() const
If the first entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getFirstValue() on the child.
Definition: NanoVDB.h:3475
StatsT mAverage
Definition: NanoVDB.h:3154
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3852
Definition: NanoVDB.h:2948
__hostdev__ const Map & map() const
Definition: NanoVDB.h:5525
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:3347
CoordT mBBoxMin
Definition: NanoVDB.h:3705
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:2858
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:2571
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4084
__hostdev__ ValueType getValue(uint32_t offset) const
Return the voxel value at the given offset.
Definition: NanoVDB.h:4440
__hostdev__ ValueIter & operator++()
Definition: NanoVDB.h:2894
MaskT mValueMask
Definition: NanoVDB.h:3149
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findNext(uint32_t start) const
Definition: NanoVDB.h:1343
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:2840
__hostdev__ uint32_t totalNodeCount() const
Definition: NanoVDB.h:2486
uint16_t mMin
Definition: NanoVDB.h:3712
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:2572
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1716
__hostdev__ Vec3d voxelSize() const
Definition: NanoVDB.h:5528
typename FloatTraits< ValueType >::FloatType FloatType
Definition: NanoVDB.h:4152
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:2783
Like ValueOnIndex but with a mutable mask.
Definition: NanoVDB.h:181
GridMetaData(const GridData *gridData)
Definition: NanoVDB.h:5476
const typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1694
__hostdev__ DataType * data()
Definition: NanoVDB.h:2123
MaskT< LOG2DIM > mValues
Definition: NanoVDB.h:3946
This is a convenient class that allows for access to grid meta-data that are independent of the value type of a grid.
Definition: NanoVDB.h:5460
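GridMetaData is useful when a buffer must be inspected before its value type is known. A minimal sketch, assuming the buffer already holds a valid, 32-byte-aligned NanoVDB grid; the function name printGridInfo is illustrative:

    #include <nanovdb/NanoVDB.h>
    #include <cstdio>

    // Print a few type-independent properties of a serialized grid (sketch).
    void printGridInfo(const void* buffer)
    {
        nanovdb::GridMetaData meta(reinterpret_cast<const nanovdb::GridData*>(buffer));
        std::printf("grids in buffer: %u\n", meta.gridCount());
        std::printf("active voxels  : %llu\n", (unsigned long long)meta.activeVoxelCount());
        std::printf("root entries   : %u\n", meta.rootTableSize());
    }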
__hostdev__ TileIterator beginTile()
Definition: NanoVDB.h:2728
__hostdev__ int findBlindData(const char *name) const
Return the index of the first blind data with specified name if found, otherwise -1.
Definition: NanoVDB.h:2322
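findBlindData and blindMetaData (below) together let client code locate named side-car data in a grid. A minimal sketch, assuming a float grid that may or may not carry the requested channel; the helper name findChannel is made up:

    #include <nanovdb/NanoVDB.h>

    // Return the metadata of a named blind-data channel, or nullptr if it is absent (sketch).
    const nanovdb::GridBlindMetaData* findChannel(const nanovdb::FloatGrid& grid, const char* name)
    {
        const int i = grid.findBlindData(name); // -1 when no channel matches
        return i >= 0 ? &grid.blindMetaData(static_cast<uint32_t>(i)) : nullptr;
    }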
__hostdev__ uint32_t gridCount() const
Definition: NanoVDB.h:5523
uint32_t mTableSize
Definition: NanoVDB.h:2600
typename BuildT::BuildType BuildType
Definition: NanoVDB.h:2110
typename T::ValueType ElementType
Definition: NanoVDB.h:743
__hostdev__ bool isMask() const
Definition: NanoVDB.h:5513
__hostdev__ uint64_t memUsage() const
return memory usage in bytes for the leaf node
Definition: NanoVDB.h:4431
__hostdev__ bool isSequential() const
return true if the specified node type is laid out breadth-first in memory and has a fixed size...
Definition: NanoVDB.h:2248
Definition: NanoVDB.h:4218
typename RootT::CoordType CoordType
Definition: NanoVDB.h:2410
float type
Definition: NanoVDB.h:529
defines a tree type from a grid type while preserving constness
Definition: NanoVDB.h:2378
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5134
__hostdev__ GridType mapToGridType()
Definition: NanoVDB.h:867
__hostdev__ uint32_t nodeCount(int level) const
Definition: NanoVDB.h:2480
__hostdev__ ChannelT & operator()(int i, int j, int k) const
Definition: NanoVDB.h:5744
__hostdev__ AccessorType getAccessor() const
Return a new instance of a ReadAccessor used to access values in this grid.
Definition: NanoVDB.h:2160
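getAccessor returns a lightweight accessor that accelerates spatially coherent lookups, so it pays to create one and reuse it for many queries. A minimal sketch, assuming a float grid sampled along one row of voxels; the helper name sumAlongX is made up:

    #include <nanovdb/NanoVDB.h>

    // Sum the values of n consecutive voxels along +x, starting at ijk (sketch).
    float sumAlongX(const nanovdb::FloatGrid& grid, nanovdb::Coord ijk, int n)
    {
        auto acc = grid.getAccessor(); // reuse the accessor for all n lookups
        float sum = 0.0f;
        for (int i = 0; i < n; ++i, ++ijk[0])
            sum += acc.getValue(ijk);
        return sum;
    }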
Visits child nodes of this node only.
Definition: NanoVDB.h:3267
__hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:3528
typename remove_const< T >::type type
Definition: Util.h:431
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4000
__hostdev__ void setValue(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4181
__hostdev__ Checksum()
Default constructor initializes the checksum to EMPTY.
Definition: NanoVDB.h:1804
uint64_t Type
Definition: NanoVDB.h:486
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3814
__hostdev__ ValueIterator(const InternalNode *parent)
Definition: NanoVDB.h:3322
typename Mask< 3 >::template Iterator< ON > MaskIterT
Definition: NanoVDB.h:4234
GridType mDataType
Definition: NanoVDB.h:1557
Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
Definition: NanoVDB.h:4215
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4960
__hostdev__ DataType * data()
Definition: NanoVDB.h:2992
__hostdev__ const uint64_t & valueCount() const
Return total number of values indexed by the IndexGrid.
Definition: NanoVDB.h:5722
__hostdev__ NodeTrait< RootT, LEVEL >::type * getFirstNode()
return a pointer to the first node at the specified level
Definition: NanoVDB.h:2515
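Because nodes of each type are laid out breadth-first with a fixed size (see isSequential above), the leaf nodes of a grid can be visited as a flat array. A minimal sketch, assuming a float grid; getFirstLeaf and nodeCount are the Tree methods documented in this file, and countLeafVoxelsOn is an illustrative name:

    #include <nanovdb/NanoVDB.h>

    // Visit every leaf node of a float grid as a contiguous array (sketch).
    uint64_t countLeafVoxelsOn(const nanovdb::FloatGrid& grid)
    {
        const auto& tree = grid.tree();
        const auto* leaf = tree.getFirstLeaf(); // first leaf node; leaves are contiguous
        uint64_t n = 0;
        for (uint32_t i = 0, count = tree.nodeCount(0); i < count; ++i)
            n += leaf[i].valueMask().countOn(); // active voxels per leaf
        return n;
    }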
typename util::match_const< Tile, DataT >::type TileT
Definition: NanoVDB.h:2661
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2634
__hostdev__ Vec3T worldToIndexDir(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2184
__hostdev__ DenseIterator cbeginChildAll() const
Definition: NanoVDB.h:3426
BuildT BuildType
Definition: NanoVDB.h:5035
__hostdev__ uint32_t rootTableSize() const
Return the size of the root table.
Definition: NanoVDB.h:2072
bool FloatType
Definition: NanoVDB.h:3938
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:4473
double mTaperD
Definition: NanoVDB.h:1385
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3422
uint32_t dim
Definition: NanoVDB.h:6216
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:3013
MaskT mChildMask
Definition: NanoVDB.h:3150
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:4463
__hostdev__ Version()
Default constructor.
Definition: NanoVDB.h:679
__hostdev__ void setMinMaxOn(bool on=true)
Definition: NanoVDB.h:1965
static __hostdev__ uint32_t valueCount()
Definition: NanoVDB.h:4079
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3630
__hostdev__ const Tile * tile(uint32_t n) const
Returns a pointer to the tile at the specified linear offset.
Definition: NanoVDB.h:2646
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:3213
__hostdev__ ValueType getFirstValue() const
Return the first value in this leaf node.
Definition: NanoVDB.h:4446
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:3381
typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1730
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6085
__hostdev__ ValueOnIterator(const InternalNode *parent)
Definition: NanoVDB.h:3361
__hostdev__ ConstValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:2945
Definition: NanoVDB.h:2616
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4920
typename BuildToValueMap< BuildT >::Type ValueT
Definition: NanoVDB.h:6179
FloatType mAverage
Definition: NanoVDB.h:3634
__hostdev__ TileIter(DataT *data, uint32_t pos=0)
Definition: NanoVDB.h:2667
BuildT BuildType
Definition: NanoVDB.h:4806
ValueT ValueType
Definition: NanoVDB.h:5036
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:3499
float Type
Definition: NanoVDB.h:500
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2812
StatsT mStdDevi
Definition: NanoVDB.h:2606
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4059
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2125
__hostdev__ uint32_t & head()
Definition: NanoVDB.h:1830
ValueT value
Definition: NanoVDB.h:3138
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:4169
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:5515
CoordBBox indexBBox
Definition: NanoVDB.h:5849
const std::enable_if<!VecTraits< T >::IsVec, T >::type & max(const T &a, const T &b)
Definition: Composite.h:110
__hostdev__ uint32_t rootTableSize() const
Definition: NanoVDB.h:5534
__hostdev__ TileIter & operator++()
Definition: NanoVDB.h:2678
__hostdev__ bool isCached1(const CoordType &ijk) const
Definition: NanoVDB.h:5111
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:3200
__hostdev__ ValueOnIterator beginValueOn()
Definition: NanoVDB.h:2944
__hostdev__ const ChildT * getChild(const Tile *tile) const
Definition: NanoVDB.h:2777
__hostdev__ bool isEmpty() const
Return true if the 64 bit checksum is disabled (unset).
Definition: NanoVDB.h:1843
__hostdev__ Iterator(uint32_t pos, const Mask *parent)
Definition: NanoVDB.h:1069
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5042
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:3659
typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1731
__hostdev__ void * nodePtr()
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2020
__hostdev__ float getMin() const
return the quantized minimum of the active values in this node
Definition: NanoVDB.h:3734
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4962
__hostdev__ ValueOffIterator()
Definition: NanoVDB.h:4276
ChildT ChildNodeType
Definition: NanoVDB.h:3249
typename DataType::BuildT BuildType
Definition: NanoVDB.h:3247
__hostdev__ ValueOffIterator cbeginValueOff() const
Definition: NanoVDB.h:4300
__hostdev__ GridClass toGridClass(GridClass defaultClass=GridClass::Unknown)
Maps from a templated build type to a GridClass enum.
Definition: NanoVDB.h:873
typename DataType::ValueType ValueType
Definition: NanoVDB.h:4226
float type
Definition: NanoVDB.h:522
__hostdev__ uint32_t getMajor() const
Definition: NanoVDB.h:701
__hostdev__ Vec3T indexToWorldGradF(const Vec3T &grad) const
Transforms the gradient from index space to world space.
Definition: NanoVDB.h:2212
__hostdev__ const NodeTrait< TreeT, LEVEL >::type * getNode() const
Definition: NanoVDB.h:5292
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:2239
uint64_t type
Definition: NanoVDB.h:536
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:4190
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3248
__hostdev__ void setValue(const CoordType &k, bool s, const ValueType &v)
Definition: NanoVDB.h:2626
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5341
__hostdev__ const uint64_t * words() const
Definition: NanoVDB.h:1153
__hostdev__ const GridBlindMetaData & blindMetaData(uint32_t n) const
Definition: NanoVDB.h:2305
static __hostdev__ uint32_t bitCount()
Return the number of bits available in this Mask.
Definition: NanoVDB.h:1037
__hostdev__ void setDev(const bool &)
Definition: NanoVDB.h:3966
uint64_t Type
Definition: NanoVDB.h:479
__hostdev__ Vec3T indexToWorldDir(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2179
__hostdev__ const void * getRoot() const
Get a const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2359
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:3214
__hostdev__ bool isActive() const
Return true if any of the voxel values are active in this leaf node.
Definition: NanoVDB.h:4466
GridBlindDataClass
Blind-data Classes that are currently supported by NanoVDB.
Definition: NanoVDB.h:411
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4159
__hostdev__ const void * treePtr() const
Definition: NanoVDB.h:2003
static __hostdev__ size_t memUsage()
Return the memory footprint in bytes of this Mask.
Definition: NanoVDB.h:1034
const typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1738
Visits all active values in a leaf node.
Definition: NanoVDB.h:4237
__hostdev__ const LeafNodeType * getFirstLeaf() const
Definition: NanoVDB.h:2531
__hostdev__ Vec3T indexToWorldDirF(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2202