taichi.lang
¶
Subpackages¶
Submodules¶
taichi.lang.any_array
taichi.lang.common_ops
taichi.lang.enums
taichi.lang.exception
taichi.lang.expr
taichi.lang.field
taichi.lang.impl
taichi.lang.kernel_arguments
taichi.lang.kernel_impl
taichi.lang.matrix
taichi.lang.mesh
taichi.lang.ops
taichi.lang.quant_impl
taichi.lang.runtime_ops
taichi.lang.shell
taichi.lang.snode
taichi.lang.sort
taichi.lang.source_builder
taichi.lang.struct
taichi.lang.tape
taichi.lang.type_factory_impl
taichi.lang.util
Package Contents¶
Classes¶
ScalarNdarray – Taichi ndarray with scalar elements.
AnyArray – Class for arbitrary arrays in Python AST.
AnyArrayAccess – Class for first-level access to AnyArray with Vector/Matrix elements in Python AST.
Layout – Layout of a Taichi field or ndarray.
Expr – A Python-side Expr wrapper, whose member variable ptr is an instance of the C++ Expr class.
Field – Taichi field with SNode implementation.
ScalarField – Taichi scalar field with SNode implementation.
Matrix – The matrix class.
MatrixField – Taichi matrix field with SNode implementation.
SNode – A Python-side SNode wrapper.
Struct – The Struct type class.
StructField – Taichi struct field with SNode implementation.
KernelProfiler – Kernel profiler of Taichi.
CuptiMetric – A class to add a CUPTI metric for KernelProfiler.
FieldsBuilder – A builder that constructs a SNodeTree instance.
Functions¶
axes – Defines a list of axes to be used by a field.
deactivate_all_snodes – Recursively deactivate all SNodes.
field – Defines a Taichi field.
grouped – Groups a list of independent loop indices into a Vector().
insert_expr_stmt_if_ti_func – Used only for real functions; inserts a FrontendExprStmt into the C++ AST if the callee is a Taichi function.
ndarray – Defines a Taichi ndarray with scalar elements.
one – Fill the input field with one.
static – Evaluates a Taichi-scope expression at compile time.
zero – Fill the input field with zero.
data_oriented – Marks a class as Taichi compatible.
func – Marks a function as callable in Taichi-scope.
kernel – Marks a function as a Taichi kernel.
pyfunc – Marks a function as callable in both Taichi and Python scopes.
Vector – Construct a Vector instance, i.e. a 1-D Matrix.
neg – The negate function.
sin – The sine function.
cos – The cosine function.
asin – The inverse sine (arcsine) function.
acos – The inverse cosine (arccosine) function.
sqrt – The square root function.
rsqrt – The reciprocal of the square root function.
round – The round function.
floor – The floor function.
ceil – The ceil function.
tan – The tangent function.
tanh – The hyperbolic tangent function.
exp – The exp function.
log – The natural logarithm function.
abs – The absolute value function.
bit_not – The bitwise not function.
logical_not – The logical not function.
random – The random function.
add – The add function.
sub – The subtract function.
mul – The multiply function.
mod – The remainder function.
pow – The power function.
floordiv – The floor division function.
truediv – The true division function.
max – The maximum function.
min – The minimum function.
atan2 – The two-argument inverse tangent function.
raw_div – The raw division function.
raw_mod – The raw modulo function; both a and b can be floats.
cmp_lt – Compare two values (less than).
cmp_le – Compare two values (less than or equal to).
cmp_gt – Compare two values (greater than).
cmp_ge – Compare two values (greater than or equal to).
cmp_eq – Compare two values (equal to).
cmp_ne – Compare two values (not equal to).
bit_or – Compute bitwise-or.
bit_and – Compute bitwise-and.
bit_xor – Compute bitwise-xor.
bit_shl – Compute bitwise shift left.
bit_sar – Compute bitwise arithmetic shift right.
bit_shr – Compute bitwise logical shift right (in Taichi scope).
get_addr – Query the memory address (on CUDA/x64) of field f at index indices.
rescale_index – Rescales the index I of field (or SNode) a to match the shape of SNode b.
has_pytorch – Whether pytorch is available in the current Python environment.
to_numpy_type – Convert a Taichi data type to its counterpart in numpy.
to_pytorch_type – Convert a Taichi data type to its counterpart in torch.
to_taichi_type – Convert a numpy or torch data type to its counterpart in Taichi.
get_default_kernel_profiler – Returns the single global KernelProfiler instance.
warning – Print a warning message.
ext_arr – Type annotation for external arrays.
print_kernel_profile_info – Print the profiling results of Taichi kernels.
query_kernel_profile_info – Query kernel elapsed time (min, avg, max) on devices using the kernel name.
clear_kernel_profile_info – Clear all KernelProfiler records.
kernel_profiler_total_time – Get the elapsed time of all kernels recorded by KernelProfiler.
set_kernel_profiler_toolkit – Set the toolkit used by KernelProfiler.
set_kernel_profile_metrics – Set metrics that will be collected by the CUPTI toolkit.
collect_kernel_profile_metrics – Set temporary metrics that will be collected by the CUPTI toolkit within this context.
print_memory_profile_info – Memory profiling tool for LLVM backends with full sparse support.
is_extension_supported – Checks whether an extension is supported on an arch.
reset – Resets Taichi to its initial state.
prepare_sandbox – Returns a temporary directory, which will be automatically deleted on exit.
init – Initializes the Taichi runtime.
block_local – Hints Taichi to cache the fields and to enable the BLS optimization.
Tape – Return a context manager of TapeImpl.
clear_all_gradients – Set all fields' gradients to 0.
is_arch_supported – Checks whether an arch is supported on the machine.
Attributes¶
root – Root of the declared Taichi fields (see field()).
any_arr – Alias for ArgAnyArray.
template – Alias for Template.
x86_64 – The x64 CPU backend.
x64 – The x64 CPU backend.
arm64 – The ARM CPU backend.
cuda – The CUDA backend.
metal – The Apple Metal backend.
opengl – The OpenGL backend. OpenGL 4.3 required.
wasm – The WebAssembly backend.
vulkan – The Vulkan backend.
dx11 – The DX11 backend.
gpu – A list of GPU backends supported on the current system.
cpu – A list of CPU backends supported on the current system.
- taichi.lang.locale_encode(path)¶
- class taichi.lang.ScalarNdarray(dtype, arr_shape)¶
Bases:
Ndarray
Taichi ndarray with scalar elements.
- Parameters
dtype (DataType) – Data type of each value.
shape (Tuple[int]) – Shape of the ndarray.
- property element_shape(self)¶
Gets ndarray element shape.
- Returns
Ndarray element shape.
- Return type
Tuple[Int]
- to_numpy(self)¶
- from_numpy(self, arr)¶
- fill_by_kernel(self, val)¶
Fills ndarray with a specific scalar value using a ti.kernel.
- Parameters
val (Union[int, float]) – Value to fill.
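Example (a minimal sketch, not part of the original docstring; it assumes a backend with ndarray support, e.g. CPU or CUDA):
>>> import taichi as ti
>>> import numpy as np
>>> ti.init(arch=ti.cpu)
>>> a = ti.ndarray(ti.f32, shape=(16, 8))             # backed by a ScalarNdarray
>>> a.from_numpy(np.ones((16, 8), dtype=np.float32))  # load data from numpy
>>> print(a.element_shape)                            # () for scalar elements
>>> print(a.to_numpy().sum())                         # 128.0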
- class taichi.lang.GroupedNDRange(r)¶
- class taichi.lang.AnyArray(ptr, element_shape, layout)¶
Class for arbitrary arrays in Python AST.
- Parameters
ptr (taichi_core.Expr) – A taichi_core.Expr wrapping a taichi_core.ExternalTensorExpression.
element_shape (Tuple[Int]) – () if scalar elements (default), (n) if vector elements, and (n, m) if matrix elements.
layout (Layout) – Memory layout.
- property shape(self)¶
A list containing sizes for each dimension. Note that element shape will be excluded.
- Returns
The result list.
- Return type
List[Int]
- loop_range(self)¶
Gets the corresponding taichi_core.Expr to serve as loop range.
This is not in use now because struct fors on AnyArrays are not supported yet.
- Returns
See above.
- Return type
taichi_core.Expr
- class taichi.lang.AnyArrayAccess(arr, indices_first)¶
Class for first-level access to AnyArray with Vector/Matrix elements in Python AST.
- Parameters
arr (AnyArray) – See above.
indices_first (Tuple[Int]) – Indices of first-level access.
- subscript(self, i, j)¶
- class taichi.lang.Layout¶
Bases:
enum.Enum
Layout of a Taichi field or ndarray.
Currently, AOS (array of structures) and SOA (structure of arrays) are supported.
- AOS = 1¶
- SOA = 2¶
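Example (a hedged sketch of how a layout is selected when declaring a vector field; ti.Vector.field forwarding a layout argument follows the Matrix.field classmethod documented below):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> # AOS (default): the components of each vector are stored together.
>>> pos = ti.Vector.field(3, dtype=ti.f32, shape=1024, layout=ti.Layout.AOS)
>>> # SOA: each component is stored in its own contiguous array.
>>> vel = ti.Vector.field(3, dtype=ti.f32, shape=1024, layout=ti.Layout.SOA)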
- exception taichi.lang.InvalidOperationError¶
Bases:
Exception
Raised when an operation is performed in a context where it is not allowed.
- exception taichi.lang.TaichiCompilationError¶
Bases:
Exception
Base class for all Taichi compilation errors.
- exception taichi.lang.TaichiNameError¶
Bases:
TaichiCompilationError
,NameError
Raised when an undefined name is used inside Taichi-scope code.
- exception taichi.lang.TaichiSyntaxError¶
Bases:
TaichiCompilationError
,SyntaxError
Raised when Taichi-scope code contains a syntax error.
- exception taichi.lang.TaichiTypeError¶
Bases:
TaichiCompilationError
,TypeError
Raised when a type error is found while compiling Taichi-scope code.
- class taichi.lang.Expr(*args, tb=None)¶
Bases:
taichi.lang.common_ops.TaichiOperations
A Python-side Expr wrapper, whose member variable ptr is an instance of C++ Expr class. A C++ Expr object contains member variable expr which holds an instance of C++ Expression class.
- taichi.lang.make_expr_group(*exprs)¶
- class taichi.lang.Field(_vars)¶
Taichi field with SNode implementation.
A field is constructed by a list of field members. For example, a scalar field has 1 field member, while a 3x3 matrix field has 9 field members. A field member is a Python Expr wrapping a C++ GlobalVariableExpression. A C++ GlobalVariableExpression wraps the corresponding SNode.
- Parameters
vars (List[Expr]) – Field members.
- property snode(self)¶
Gets representative SNode for info purposes.
- Returns
Representative SNode (SNode of first field member).
- Return type
- property shape(self)¶
Gets field shape.
- Returns
Field shape.
- Return type
Tuple[Int]
- property dtype(self)¶
Gets data type of each individual value.
- Returns
Data type of each individual value.
- Return type
DataType
- property name(self)¶
Gets field name.
- Returns
Field name.
- Return type
str
- parent(self, n=1)¶
Gets an ancestor of the representative SNode in the SNode tree.
- Parameters
n (int) – the number of levels going up from the representative SNode.
- Returns
The n-th parent of the representative SNode.
- Return type
- loop_range(self)¶
Gets representative field member for loop range info.
- Returns
Representative (first) field member.
- Return type
taichi_core.Expr
- set_grad(self, grad)¶
Sets corresponding gradient field.
- Parameters
grad (Field) – Corresponding gradient field.
- abstract fill(self, val)¶
Fills self with a specific value.
- Parameters
val (Union[int, float]) – Value to fill.
- abstract to_numpy(self, dtype=None)¶
Converts self to a numpy array.
- Parameters
dtype (DataType, optional) – The desired data type of returned numpy array.
- Returns
The result numpy array.
- Return type
numpy.ndarray
- abstract to_torch(self, device=None)¶
Converts self to a torch tensor.
- Parameters
device (torch.device, optional) – The desired device of returned tensor.
- Returns
The result torch tensor.
- Return type
torch.tensor
- abstract from_numpy(self, arr)¶
Loads all elements from a numpy array.
The shape of the numpy array needs to be the same as self.
- Parameters
arr (numpy.ndarray) – The source numpy array.
- from_torch(self, arr)¶
Loads all elements from a torch tensor.
The shape of the torch tensor needs to be the same as self.
- Parameters
arr (torch.tensor) – The source torch tensor.
- copy_from(self, other)¶
Copies all elements from another field.
The shape of the other field needs to be the same as self.
- Parameters
other (Field) – The source field.
- pad_key(self, key)¶
- initialize_host_accessors(self)¶
- host_access(self, key)¶
- class taichi.lang.ScalarField(var)¶
Bases:
Field
Taichi scalar field with SNode implementation.
- Parameters
var (Expr) – Field member.
- fill(self, val)¶
Fills self with a specific value.
- Parameters
val (Union[int, float]) – Value to fill.
- to_numpy(self, dtype=None)¶
Converts self to a numpy array.
- Parameters
dtype (DataType, optional) – The desired data type of returned numpy array.
- Returns
The result numpy array.
- Return type
numpy.ndarray
- to_torch(self, device=None)¶
Converts self to a torch tensor.
- Parameters
device (torch.device, optional) – The desired device of returned tensor.
- Returns
The result torch tensor.
- Return type
torch.tensor
- from_numpy(self, arr)¶
Loads all elements from a numpy array.
The shape of the numpy array needs to be the same as self.
- Parameters
arr (numpy.ndarray) – The source numpy array.
- taichi.lang.axes(*x: Iterable[int])¶
Defines a list of axes to be used by a field.
- Parameters
*x – A list of axes to be activated
Note that Taichi has already provided a set of commonly used axes. For example, ti.ij is just axes(0, 1) under the hood.
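Example (a small sketch, assuming axes is re-exported at the top level as ti.axes, as in this listing):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> x = ti.field(ti.f32)
>>> # equivalent to ti.root.dense(ti.ij, (8, 8)).place(x)
>>> ti.root.dense(ti.axes(0, 1), (8, 8)).place(x)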
- taichi.lang.begin_frontend_if(cond)¶
- taichi.lang.begin_frontend_struct_for(group, loop_range)¶
- taichi.lang.call_internal(name, *args)¶
- taichi.lang.current_cfg()¶
- taichi.lang.deactivate_all_snodes()¶
Recursively deactivate all SNodes.
- taichi.lang.expr_init(rhs)¶
- taichi.lang.expr_init_func(rhs)¶
- taichi.lang.expr_init_list(xs, expected)¶
- taichi.lang.field(dtype, shape=None, name='', offset=None, needs_grad=False)¶
Defines a Taichi field.
A Taichi field can be viewed as an abstract N-dimensional array, hiding away the complexity of how its underlying SNodes are actually defined. The data in a Taichi field can be directly accessed by a Taichi kernel().
See also https://docs.taichi.graphics/lang/articles/basic/field
- Parameters
dtype (DataType) – data type of the field.
shape (Union[int, tuple[int]], optional) – shape of the field
name (str, optional) – name of the field
offset (Union[int, tuple[int]], optional) – offset of the field domain
needs_grad (bool, optional) – whether this field participates in autodiff and thus needs an adjoint field to store the gradients.
Example
The code below shows how a Taichi field can be declared and defined:
>>> x1 = ti.field(ti.f32, shape=(16, 8))
>>>
>>> # Equivalently
>>> x2 = ti.field(ti.f32)
>>> ti.root.dense(ti.ij, shape=(16, 8)).place(x2)
- taichi.lang.get_runtime()¶
- taichi.lang.grouped(x)¶
Groups a list of independent loop indices into a Vector().
- Parameters
x (Any) – does the grouping only if x is an ndrange.
Example:
>>> for I in ti.grouped(ndrange(8, 16)):
>>>     print(I[0] + I[1])
- taichi.lang.insert_expr_stmt_if_ti_func(func, *args, **kwargs)¶
This method is used only for real functions. It inserts a FrontendExprStmt to the C++ AST to hold the function call if func is a Taichi function.
- Parameters
func – The function to be called.
args – The arguments of the function call.
kwargs – The keyword arguments of the function call.
- Returns
The return value of the function call if it’s a non-Taichi function. Returns None if it’s a Taichi function.
- taichi.lang.ndarray(dtype, shape)¶
Defines a Taichi ndarray with scalar elements.
- Parameters
dtype (DataType) – Data type of each value.
shape (Union[int, tuple[int]]) – Shape of the ndarray.
Example
The code below shows how a Taichi ndarray with scalar elements can be declared and defined:
>>> x = ti.ndarray(ti.f32, shape=(16, 8))
- taichi.lang.one(x)¶
Fill the input field with one.
- Parameters
x (DataType) – The input field to fill.
- Returns
The output field, which keeps the shape but filled with one.
- Return type
DataType
- taichi.lang.root¶
Root of the declared Taichi fields (see field()).
See also https://docs.taichi.graphics/lang/articles/advanced/layout
Example:
>>> x = ti.field(ti.f32)
>>> ti.root.pointer(ti.ij, 4).dense(ti.ij, 8).place(x)
- taichi.lang.static(x, *xs)¶
Evaluates a Taichi-scope expression at compile time.
static() is what enables the so-called metaprogramming in Taichi. It is in many ways similar to constexpr in C++11.
See also https://docs.taichi.graphics/lang/articles/advanced/meta.
- Parameters
x (Any) – an expression to be evaluated
*xs (Any) – for Python-ish swapping assignment
Example
The most common usage of static() is for compile-time evaluation:
>>> @ti.kernel
>>> def run():
>>>     if ti.static(FOO):
>>>         do_a()
>>>     else:
>>>         do_b()
Depending on the value of FOO, run() will be directly compiled into either do_a() or do_b(). Thus there won't be a runtime condition check.
Another common usage is for compile-time loop unrolling:
>>> @ti.kernel
>>> def run():
>>>     for i in ti.static(range(3)):
>>>         print(i)
>>>
>>> # The above is equivalent to:
>>> @ti.kernel
>>> def run():
>>>     print(0)
>>>     print(1)
>>>     print(2)
- taichi.lang.static_assert(cond, msg=None)¶
- taichi.lang.static_print(*args, __p=print, **kwargs)¶
- taichi.lang.stop_grad(x)¶
- taichi.lang.subscript(value, *_indices, skip_reordered=False)¶
- taichi.lang.ti_assert(cond, msg, extra_args)¶
- taichi.lang.ti_float(_var)¶
- taichi.lang.ti_format(*args, **kwargs)¶
- taichi.lang.ti_int(_var)¶
- taichi.lang.ti_print(*_vars, sep=' ', end='\n')¶
- taichi.lang.zero(x)¶
Fill the input field with zero.
- Parameters
x (DataType) – The input field to fill.
- Returns
The output field, which keeps the shape but filled with zero.
- Return type
DataType
- exception taichi.lang.KernelArgError(pos, needed, provided)¶
Bases:
Exception
Raised when a kernel argument does not match the kernel's annotation.
- exception taichi.lang.KernelDefError¶
Bases:
Exception
Raised when a kernel is defined in an invalid way.
- taichi.lang.data_oriented(cls)¶
Marks a class as Taichi compatible.
To allow for modularized code, Taichi provides this decorator so that Taichi kernels can be defined inside a class.
See also https://docs.taichi.graphics/lang/articles/advanced/odop
Example:
>>> @ti.data_oriented
>>> class TiArray:
>>>     def __init__(self, n):
>>>         self.x = ti.field(ti.f32, shape=n)
>>>
>>>     @ti.kernel
>>>     def inc(self):
>>>         for i in self.x:
>>>             self.x[i] += 1.0
>>>
>>> a = TiArray(32)
>>> a.inc()
- Parameters
cls (Class) – the class to be decorated
- Returns
The decorated class.
- taichi.lang.func(fn)¶
Marks a function as callable in Taichi-scope.
This decorator transforms a Python function into a Taichi one. Taichi will JIT compile it into native instructions.
- Parameters
fn (Callable) – The Python function to be decorated
- Returns
The decorated function
- Return type
Callable
Example:
>>> @ti.func
>>> def foo(x):
>>>     return x + 2
>>>
>>> @ti.kernel
>>> def run():
>>>     print(foo(40))  # 42
- taichi.lang.kernel(fn)¶
Marks a function as a Taichi kernel.
A Taichi kernel is a function written in Python, and gets JIT compiled by Taichi into native CPU/GPU instructions (e.g. a series of CUDA kernels). The top-level for loops are automatically parallelized, and distributed to either a CPU thread pool or massively parallel GPUs.
A kernel's gradient kernel will be generated automatically by the AutoDiff system.
See also https://docs.taichi.graphics/lang/articles/basic/syntax#kernels.
- Parameters
fn (Callable) – the Python function to be decorated
- Returns
The decorated function
- Return type
Callable
Example:
>>> x = ti.field(ti.i32, shape=(4, 8))
>>>
>>> @ti.kernel
>>> def run():
>>>     # Assigns all the elements of `x` in parallel.
>>>     for i in x:
>>>         x[i] = i
- taichi.lang.pyfunc(fn)¶
Marks a function as callable in both Taichi and Python scopes.
When called inside the Taichi scope, Taichi will JIT compile it into native instructions. Otherwise it will be invoked directly as a Python function.
See also func().
- Parameters
fn (Callable) – The Python function to be decorated
- Returns
The decorated function
- Return type
Callable
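Example (a minimal sketch, not from the original docstring):
>>> @ti.pyfunc
>>> def add_one(x):
>>>     return x + 1
>>>
>>> add_one(41)            # Python scope: runs as a normal Python function
>>>
>>> @ti.kernel
>>> def run():
>>>     print(add_one(41)) # Taichi scope: JIT compiled, prints 42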
- class taichi.lang.Matrix(n=1, m=1, dt=None, suppress_warning=False)¶
Bases:
taichi.lang.common_ops.TaichiOperations
The matrix class.
- Parameters
n (Union[int, list, tuple, np.ndarray]) – the first dimension of a matrix.
m (int) – the second dimension of a matrix.
dt (DataType) – the element data type.
- is_taichi_class = True¶
- element_wise_binary(self, foo, other)¶
- broadcast_copy(self, other)¶
- element_wise_ternary(self, foo, other, extra)¶
- element_wise_writeback_binary(self, foo, other)¶
- element_wise_unary(self, foo)¶
- linearize_entry_id(self, *args)¶
- set_entry(self, i, j, e)¶
- subscript(self, *indices)¶
- property x(self)¶
Get the first element of a matrix.
- property y(self)¶
Get the second element of a matrix.
- property z(self)¶
Get the third element of a matrix.
- property w(self)¶
Get the fourth element of a matrix.
- property value(self)¶
- to_list(self)¶
- set_entries(self, value)¶
- cast(self, dtype)¶
Cast the matrix element data type.
- Parameters
dtype (DataType) – the data type of the casted matrix element.
- Returns
A new matrix whose elements are cast to dtype.
- trace(self)¶
The sum of the matrix's diagonal elements.
- Returns
The sum of the matrix's diagonal elements.
- inverse(self)¶
The inverse of a matrix.
Note
The matrix dimension should be less than or equal to 4.
- Returns
The inverse of a matrix.
- Raises
Exception – Inversions of matrices with sizes >= 5 are not supported.
- normalized(self, eps=0)¶
Normalize a vector.
- Parameters
eps (Number) – a safe-guard value for sqrt, usually 0.
Examples:
a = ti.Vector([3, 4])
a.normalized()  # [3 / 5, 4 / 5]
# `a.normalized()` is equivalent to `a / a.norm()`.
Note
Only vector normalization is supported.
- transpose(self)¶
Get the transpose of a matrix.
- Returns
The transpose of the matrix.
- determinant(a)¶
Get the determinant of a matrix.
Note
The matrix dimension should be less than or equal to 4.
- Returns
The determinant of a matrix.
- Raises
Exception – Determinants of matrices with sizes >= 5 are not supported.
- static diag(dim, val)¶
Construct a diagonal square matrix.
- Parameters
dim (int) – the dimension of a square matrix.
val (TypeVar) – the diagonal element value.
- Returns
The constructed diagonal square matrix.
- sum(self)¶
Return the sum of all elements.
- norm(self, eps=0)¶
Return the square root of the sum of the absolute squares of its elements.
- Parameters
eps (Number) – a safe-guard value for sqrt, usually 0.
Examples:
a = ti.Vector([3, 4])
a.norm()  # sqrt(3*3 + 4*4 + 0) = 5
# `a.norm(eps)` is equivalent to `ti.sqrt(a.dot(a) + eps)`.
- Returns
The square root of the sum of the absolute squares of its elements.
- norm_inv(self, eps=0)¶
Return the inverse of the matrix/vector norm. For norm, please see norm().
- Parameters
eps (Number) – a safe-guard value for sqrt, usually 0.
- Returns
The inverse of the matrix/vector norm.
- norm_sqr(self)¶
Return the sum of the absolute squares of its elements.
- max(self)¶
Return the maximum element value.
- min(self)¶
Return the minimum element value.
- any(self)¶
Test whether any element is nonzero.
- Returns
True if any element is nonzero, False otherwise.
- Return type
bool
- all(self)¶
Test whether all elements are nonzero.
- Returns
True if all elements are nonzero, False otherwise.
- Return type
bool
- fill(self, val)¶
Fills the matrix with a specific value in Taichi scope.
- Parameters
val (Union[int, float]) – Value to fill.
- to_numpy(self, keep_dims=False)¶
Converts the Matrix to a numpy array.
- Parameters
keep_dims (bool, optional) – Whether to keep the dimension after conversion. When keep_dims=False, the resulting numpy array should skip the matrix dims with size 1.
- Returns
The result numpy array.
- Return type
numpy.ndarray
- static zero(dt, n, m=None)¶
Construct a Matrix filled with zeros.
- static one(dt, n, m=None)¶
Construct a Matrix filled with ones.
- static unit(n, i, dt=None)¶
Construct a unit Vector (1-D matrix), i.e., a vector with only one entry filled with one and all other entries zero.
- static identity(dt, n)¶
Construct an identity Matrix with shape (n, n).
- static rotation2d(alpha)¶
- classmethod field(cls, n, m, dtype, shape=None, name='', offset=None, needs_grad=False, layout=Layout.AOS)¶
Construct a data container to hold all elements of the Matrix.
- Parameters
n (int) – The desired number of rows of the Matrix.
m (int) – The desired number of columns of the Matrix.
dtype (DataType, optional) – The desired data type of the Matrix.
shape (Union[int, tuple of int], optional) – The desired shape of the Matrix.
name (string, optional) – The custom name of the field.
offset (Union[int, tuple of int], optional) – The coordinate offset of all elements in a field.
needs_grad (bool, optional) – Whether the Matrix need gradients.
layout (Layout, optional) – The field layout, i.e., Array Of Structure (AOS) or Structure Of Array (SOA).
- Returns
A MatrixField instance that serves as the data container.
- Return type
MatrixField
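Example (a short sketch of declaring a matrix field; the AOS layout is the default):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> # a 16x8 grid of 3x3 f32 matrices
>>> m = ti.Matrix.field(3, 3, dtype=ti.f32, shape=(16, 8))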
- classmethod ndarray(cls, n, m, dtype, shape, layout=Layout.AOS)¶
Defines a Taichi ndarray with matrix elements.
- Parameters
n (int) – Number of rows of the matrix.
m (int) – Number of columns of the matrix.
dtype (DataType) – Data type of each value.
shape (Union[int, tuple[int]]) – Shape of the ndarray.
layout (Layout, optional) – Memory layout, AOS by default.
Example
The code below shows how a Taichi ndarray with matrix elements can be declared and defined:
>>> x = ti.Matrix.ndarray(4, 5, ti.f32, shape=(16, 8))
- static rows(rows)¶
Construct a Matrix instance by concatenating Vectors/lists row by row.
- static cols(cols)¶
Construct a Matrix instance by concatenating Vectors/lists column by column.
- dot(self, other)¶
Perform the dot product with the input Vector (1-D Matrix).
- Parameters
other (
Matrix
) – The input Vector (1-D Matrix) to perform the dot product.- Returns
The dot product result (scalar) of the two Vectors.
- Return type
DataType
- cross(self, other)¶
Perform the cross product with the input Vector (1-D Matrix).
- class taichi.lang.MatrixField(_vars, n, m)¶
Bases:
taichi.lang.field.Field
Taichi matrix field with SNode implementation.
- Parameters
vars (List[Expr]) – Field members.
n (Int) – Number of rows.
m (Int) – Number of columns.
- get_scalar_field(self, *indices)¶
Creates a ScalarField using a specific field member. Only used for quant.
- Parameters
indices (Tuple[Int]) – Specified indices of the field member.
- Returns
The result ScalarField.
- Return type
- calc_dynamic_index_stride(self)¶
- fill(self, val)¶
Fills self with specific values.
- Parameters
val (Union[Number, List, Tuple, Matrix]) – Values to fill, which should have dimension consistent with self.
- to_numpy(self, keep_dims=False, dtype=None)¶
Converts the field instance to a NumPy array.
- Parameters
keep_dims (bool, optional) – Whether to keep the dimension after conversion. When keep_dims=True, on an n-D matrix field, the numpy array always has n+2 dims, even for 1x1, 1xn, nx1 matrix fields. When keep_dims=False, the resulting numpy array should skip the matrix dims with size 1. For example, a 4x1 or 1x4 matrix field with 5x6x7 elements results in an array of shape 5x6x7x4.
dtype (DataType, optional) – The desired data type of returned numpy array.
- Returns
The result NumPy array.
- Return type
numpy.ndarray
- to_torch(self, device=None, keep_dims=False)¶
Converts the field instance to a PyTorch tensor.
- Parameters
device (torch.device, optional) – The desired device of returned tensor.
keep_dims (bool, optional) – Whether to keep the dimension after conversion. See
to_numpy()
for more detailed explanation.
- Returns
The result torch tensor.
- Return type
torch.tensor
- from_numpy(self, arr)¶
- taichi.lang.Vector(n, dt=None, **kwargs)¶
Construct a Vector instance, i.e. a 1-D Matrix.
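Example (a minimal sketch; like the norm() and normalized() examples above, it constructs vectors in Python scope):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> v = ti.Vector([1.0, 2.0, 3.0])   # a 3-D vector, i.e. a 3x1 matrix
>>> w = ti.Vector([2.0, 0.0, 1.0])
>>> v.dot(w)     # 5.0
>>> v.norm()     # sqrt(14)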
- class taichi.lang.MeshElementFieldProxy(mesh: MeshInstance, element_type: MeshElementType, entry_expr: taichi.lang.impl.Expr)¶
- property ptr(self)¶
- property id(self)¶
- taichi.lang.TetMesh()¶
- taichi.lang.TriMesh()¶
- exception taichi.lang.TaichiSyntaxError¶
Bases:
TaichiCompilationError
,SyntaxError
Raised when Taichi-scope code contains a syntax error.
- taichi.lang.cook_dtype(dtype)¶
- taichi.lang.is_taichi_class(rhs)¶
- taichi.lang.taichi_scope(func)¶
- taichi.lang.unary_ops = []¶
- taichi.lang.stack_info()¶
- taichi.lang.is_taichi_expr(a)¶
- taichi.lang.wrap_if_not_expr(a)¶
- taichi.lang.unary(foo)¶
- taichi.lang.binary_ops = []¶
- taichi.lang.binary(foo)¶
- taichi.lang.ternary_ops = []¶
- taichi.lang.ternary(foo)¶
- taichi.lang.writeback_binary_ops = []¶
- taichi.lang.writeback_binary(foo)¶
- taichi.lang.cast(obj, dtype)¶
- taichi.lang.bit_cast(obj, dtype)¶
- taichi.lang.neg(a)¶
The negate function.
- taichi.lang.sin(a)¶
The sine function.
- taichi.lang.cos(a)¶
The cosine function.
- taichi.lang.asin(a)¶
The inverse sine (arcsine) function.
- taichi.lang.acos(a)¶
The inverse cosine (arccosine) function.
- taichi.lang.sqrt(a)¶
The square root function.
- taichi.lang.rsqrt(a)¶
The reciprocal of the square root function.
- taichi.lang.round(a)¶
The round function.
- taichi.lang.floor(a)¶
The floor function.
- taichi.lang.ceil(a)¶
The ceil function.
- taichi.lang.tan(a)¶
The tangent function.
- taichi.lang.tanh(a)¶
The hyperbolic tangent function.
- taichi.lang.exp(a)¶
The exp function.
- taichi.lang.log(a)¶
The natural logarithm function.
- taichi.lang.abs(a)¶
The absolute value function.
- taichi.lang.bit_not(a)¶
The bit not function.
- taichi.lang.logical_not(a)¶
The logical not function.
- taichi.lang.random(dtype=float)¶
The random function.
- Parameters
dtype (DataType) – Type of the random variable.
- Returns
A random variable whose type is dtype.
- taichi.lang.add(a, b)¶
The add function.
- taichi.lang.sub(a, b)¶
The sub function.
- taichi.lang.mul(a, b)¶
The multiply function.
- taichi.lang.mod(a, b)¶
The remainder function.
- taichi.lang.pow(a, b)¶
The power function.
- taichi.lang.floordiv(a, b)¶
The floor division function.
- taichi.lang.truediv(a, b)¶
True division function.
- taichi.lang.max(a, b)¶
The maximum function.
- taichi.lang.min(a, b)¶
The minimum function.
- taichi.lang.atan2(a, b)¶
The two-argument inverse tangent function.
- taichi.lang.raw_div(a, b)¶
The raw division function.
- taichi.lang.raw_mod(a, b)¶
The raw modulo function. Both a and b can be floats.
- taichi.lang.cmp_lt(a, b)¶
Compare two values (less than)
- taichi.lang.cmp_le(a, b)¶
Compare two values (less than or equal to)
- taichi.lang.cmp_gt(a, b)¶
Compare two values (greater than)
- taichi.lang.cmp_ge(a, b)¶
Compare two values (greater than or equal to)
- taichi.lang.cmp_eq(a, b)¶
Compare two values (equal to)
- taichi.lang.cmp_ne(a, b)¶
Compare two values (not equal to)
- taichi.lang.bit_or(a, b)¶
Computes bitwise-or
- taichi.lang.bit_and(a, b)¶
Compute bitwise-and
- taichi.lang.bit_xor(a, b)¶
Compute bitwise-xor
- taichi.lang.bit_shl(a, b)¶
Compute bitwise shift left
- taichi.lang.bit_sar(a, b)¶
Compute bitwise arithmetic shift right.
- taichi.lang.bit_shr(a, b)¶
Compute bitwise logical shift right (in Taichi scope).
- taichi.lang.logical_or¶
- taichi.lang.logical_and¶
- taichi.lang.select(cond, a, b)¶
- taichi.lang.atomic_add(a, b)¶
- taichi.lang.atomic_sub(a, b)¶
- taichi.lang.atomic_min(a, b)¶
- taichi.lang.atomic_max(a, b)¶
- taichi.lang.atomic_and(a, b)¶
- taichi.lang.atomic_or(a, b)¶
- taichi.lang.atomic_xor(a, b)¶
- taichi.lang.assign(a, b)¶
- taichi.lang.ti_max(*args)¶
- taichi.lang.ti_min(*args)¶
- taichi.lang.ti_any(a)¶
- taichi.lang.ti_all(a)¶
- taichi.lang.quant¶
- taichi.lang.async_flush()¶
- taichi.lang.sync()¶
- class taichi.lang.SNode(ptr)¶
A Python-side SNode wrapper.
For more information on Taichi's SNode system, please check out https://docs.taichi.graphics/lang/articles/advanced/layout.
- Parameters
ptr (pointer) – The C++ side SNode pointer.
- dense(self, axes, dimensions)¶
Adds a dense SNode as a child component of self.
- Parameters
axes (List[Axis]) – Axes to activate.
dimensions (Union[List[int], int]) – Shape of each axis.
- Returns
The added
SNode
instance.
- pointer(self, axes, dimensions)¶
Adds a pointer SNode as a child component of self.
- Parameters
axes (List[Axis]) – Axes to activate.
dimensions (Union[List[int], int]) – Shape of each axis.
- Returns
The added
SNode
instance.
- static hash(axes, dimensions)¶
Not supported.
- dynamic(self, axis, dimension, chunk_size=None)¶
Adds a dynamic SNode as a child component of self.
- Parameters
axis (List[Axis]) – Axis to activate; exactly one axis must be provided.
dimension (int) – Shape of the axis.
chunk_size (int) – Chunk size.
- Returns
The added
SNode
instance.
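Example (a hedged sketch of declaring a dynamic SNode):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> x = ti.field(ti.i32)
>>> # a variable-length list of up to 1024 int32 values along axis i,
>>> # allocated in chunks of 32 elements
>>> ti.root.dynamic(ti.i, 1024, chunk_size=32).place(x)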
- bitmasked(self, axes, dimensions)¶
Adds a bitmasked SNode as a child component of self.
- Parameters
axes (List[Axis]) – Axes to activate.
dimensions (Union[List[int], int]) – Shape of each axis.
- Returns
The added
SNode
instance.
- bit_struct(self, num_bits: int)¶
Adds a bit_struct SNode as a child component of self.
- Parameters
num_bits – Number of bits to use.
- Returns
The added
SNode
instance.
- bit_array(self, axes, dimensions, num_bits)¶
Adds a bit_array SNode as a child component of self.
- Parameters
axes (List[Axis]) – Axes to activate.
dimensions (Union[List[int], int]) – Shape of each axis.
num_bits (int) – Number of bits to use.
- Returns
The added
SNode
instance.
- place(self, *args, offset=None, shared_exponent=False)¶
Places a list of Taichi fields under the self container.
- Parameters
*args (List[ti.field]) – A list of Taichi fields to place.
offset (Union[Number, tuple[Number]]) – Offset of the field domain.
shared_exponent (bool) – Only useful for quant types.
- Returns
The self container.
- lazy_grad(self)¶
Automatically place the adjoint fields following the layout of their primal fields.
Users don't need to specify needs_grad when they define scalar/vector/matrix fields (primal fields) using autodiff. When all the primal fields are defined, calling taichi.root.lazy_grad() automatically generates their corresponding adjoint fields (gradient fields).
To know more details about primal and adjoint fields and lazy_grad(), please see Page 4 and Page 13-14 of the DiffTaichi paper: https://arxiv.org/pdf/1910.00935.pdf
- parent(self, n=1)¶
Gets an ancestor of self in the SNode tree.
- Parameters
n (int) – the number of levels going up from self.
- Returns
The n-th parent of self.
- Return type
Union[None, _Root, SNode]
- path_from_root(self)¶
Gets the path from root to self in the SNode tree.
- Returns
The list of SNodes on the path from root to self.
- Return type
List[Union[_Root, SNode]]
- property dtype(self)¶
Gets the data type of self.
- Returns
The data type of self.
- Return type
DataType
- property id(self)¶
Gets the id of self.
- Returns
The id of self.
- Return type
int
- property shape(self)¶
Gets the number of elements from root in each axis of self.
- Returns
The number of elements from root in each axis of self.
- Return type
Tuple[int]
- loop_range(self)¶
Gets the taichi_core.Expr wrapping the taichi_core.GlobalVariableExpression corresponding to self to serve as loop range.
- Returns
See above.
- Return type
taichi_core.Expr
- property name(self)¶
Gets the name of self.
- Returns
The name of self.
- Return type
str
- property needs_grad(self)¶
Checks whether self has a corresponding gradient
SNode
.- Returns
Whether self has a corresponding gradient
SNode
.- Return type
bool
- get_children(self)¶
Gets all children components of self.
- Returns
All children components of self.
- Return type
List[SNode]
- property num_dynamically_allocated(self)¶
- property cell_size_bytes(self)¶
- property offset_bytes_in_parent_cell(self)¶
- deactivate_all(self)¶
Recursively deactivate all children components of self.
- physical_index_position(self)¶
Gets mappings from virtual axes to physical axes.
- Returns
Mappings from virtual axes to physical axes.
- Return type
Dict[int, int]
- taichi.lang.activate(l, indices)¶
- taichi.lang.append(l, indices, val)¶
- taichi.lang.deactivate(l, indices)¶
- taichi.lang.get_addr(f, indices)¶
Query the memory address (on CUDA/x64) of field f at index indices.
Currently, this function can only be called inside a taichi kernel.
- Parameters
f (Union[ti.field, ti.Vector.field, ti.Matrix.field]) – Input taichi field for memory address query.
indices (Union[int, ti.Vector()]) – The specified field indices of the query.
- Returns
The memory address of f[indices].
- Return type
ti.u64
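Example (a sketch assuming get_addr is re-exported at the top level as ti.get_addr, as in this listing; the query must happen inside a kernel):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> x = ti.field(ti.f32, shape=16)
>>> addrs = ti.field(ti.u64, shape=16)
>>> @ti.kernel
>>> def record_addresses():
>>>     for i in x:
>>>         addrs[i] = ti.get_addr(x, i)   # memory address of x[i]
>>> record_addresses()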
- taichi.lang.is_active(l, indices)¶
- taichi.lang.length(l, indices)¶
- taichi.lang.rescale_index(a, b, I)¶
Rescales the index ‘I’ of field (or SNode) ‘a’ to match the shape of SNode ‘b’
- Parameters
a (ti.field(), ti.Vector.field, ti.Matrix.field()) – input taichi field or snode
b (ti.field(), ti.Vector.field, ti.Matrix.field()) – output taichi field or snode
I (ti.Vector()) – grouped loop index
- Returns
Ib – rescaled grouped loop index
- Return type
ti.Vector()
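Example (a hedged sketch: accumulating a 64x64 field onto a 16x16 field by rescaling the grouped loop index):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> coarse = ti.field(ti.f32)
>>> fine = ti.field(ti.f32)
>>> ti.root.dense(ti.ij, 16).place(coarse)
>>> ti.root.dense(ti.ij, 64).place(fine)
>>> @ti.kernel
>>> def restrict():
>>>     for I in ti.grouped(fine):
>>>         # map a fine-grid index to the matching coarse-grid index
>>>         coarse[ti.rescale_index(fine, coarse, I)] += fine[I]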
- taichi.lang.parallel_sort(keys, values=None)¶
- class taichi.lang.SourceBuilder¶
- classmethod from_file(cls, filename, compile_fn=None, _temp_dir=None)¶
- classmethod from_source(cls, source_code, compile_fn=None)¶
- class taichi.lang.Struct(*args, **kwargs)¶
Bases:
taichi.lang.common_ops.TaichiOperations
The Struct type class.
- Parameters
entries (Dict[str, Union[Dict, Expr, Matrix, Struct]]) – keys and values for struct members.
- is_taichi_class = True¶
- property keys(self)¶
- property members(self)¶
- property items(self)¶
- register_members(self)¶
- set_entries(self, value)¶
- static make_getter(key)¶
- static make_setter(key)¶
- element_wise_unary(self, foo)¶
- element_wise_binary(self, foo, other)¶
- broadcast_copy(self, other)¶
- element_wise_writeback_binary(self, foo, other)¶
- element_wise_ternary(self, foo, other, extra)¶
- fill(self, val)¶
Fills the Struct with a specific value in Taichi scope.
- Parameters
val (Union[int, float]) – Value to fill.
- to_dict(self)¶
Converts the Struct to a dictionary.
- Returns
The result dictionary.
- Return type
Dict
- classmethod field(cls, members, shape=None, name='<Struct>', offset=None, needs_grad=False, layout=Layout.AOS)¶
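Example (a minimal sketch of an ad-hoc struct value and a struct field; the member names, dtypes, and shape are illustrative only):
>>> import taichi as ti
>>> ti.init(arch=ti.cpu)
>>> s = ti.Struct(a=1.0, b=2)      # an ad-hoc struct value with members a and b
>>> print(s.a, s.b)                # 1.0 2
>>> # a struct field with two scalar members over 128 elements
>>> sf = ti.Struct.field({'a': ti.f32, 'b': ti.i32}, shape=(128,))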
- class taichi.lang.StructField(field_dict, name=None)¶
Bases:
taichi.lang.field.Field
Taichi struct field with SNode implementation.
Instead of directly containing Expr entries, the StructField object hosts its members as Field instances to support nested structs.
- Parameters
field_dict (Dict[str, Field]) – Struct field members.
name (string, optional) – The custom name of the field.
- property name(self)¶
- property keys(self)¶
- property members(self)¶
- property items(self)¶
- static make_getter(key)¶
- static make_setter(key)¶
- register_fields(self)¶
- get_field_members(self)¶
Gets a flattened list of all struct elements.
- Returns
A list of struct elements.
- property snode(self)¶
Gets representative SNode for info purposes.
- Returns
Representative SNode (SNode of first field member).
- Return type
- loop_range(self)¶
Gets representative field member for loop range info.
- Returns
Representative (first) field member.
- Return type
taichi_core.Expr
- copy_from(self, other)¶
Copies all elements from another field.
The shape of the other field needs to be the same as self.
- Parameters
other (Field) – The source field.
- fill(self, val)¶
Fills self with a specific value.
- Parameters
val (Union[int, float]) – Value to fill.
- initialize_host_accessors(self)¶
- get_member_field(self, key)¶
Creates a ScalarField using a specific field member. Only used for quant.
- Parameters
key (str) – Specified key of the field member.
- Returns
The result ScalarField.
- Return type
- from_numpy(self, array_dict)¶
- from_torch(self, array_dict)¶
- to_numpy(self)¶
Converts the Struct field instance to a dictionary of NumPy arrays. The dictionary may be nested when converting nested structs.
- Returns
The result dictionary of NumPy arrays.
- Return type
Dict[str, Union[numpy.ndarray, Dict]]
- to_torch(self, device=None)¶
Converts the Struct field instance to a dictionary of PyTorch tensors. The dictionary may be nested when converting nested structs.
- Parameters
device (torch.device, optional) – The desired device of returned tensor.
- Returns
The result PyTorch tensor.
- Return type
Dict[str, Union[torch.Tensor, Dict]]
- taichi.lang.type_factory¶
- taichi.lang.cook_dtype(dtype)¶
- taichi.lang.has_clangpp()¶
- taichi.lang.has_pytorch()¶
Whether has pytorch in the current Python environment.
- Returns
True if has pytorch else False.
- Return type
bool
- taichi.lang.is_taichi_class(rhs)¶
- taichi.lang.python_scope(func)¶
- taichi.lang.taichi_scope(func)¶
- taichi.lang.to_numpy_type(dt)¶
Convert taichi data type to its counterpart in numpy.
- Parameters
dt (DataType) – The desired data type to convert.
- Returns
The counterpart data type in numpy.
- Return type
DataType
- taichi.lang.to_pytorch_type(dt)¶
Convert taichi data type to its counterpart in torch.
- Parameters
dt (DataType) – The desired data type to convert.
- Returns
The counterpart data type in torch.
- Return type
DataType
- taichi.lang.to_taichi_type(dt)¶
Convert numpy or torch data type to its counterpart in taichi.
- Parameters
dt (DataType) – The desired data type to convert.
- Returns
The counterpart data type in taichi.
- Return type
DataType
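Example (a short sketch, assuming these helpers are re-exported at the top level as in this package listing):
>>> import taichi as ti
>>> import numpy as np
>>> ti.init(arch=ti.cpu)
>>> ti.to_numpy_type(ti.f32)     # numpy.float32
>>> ti.to_pytorch_type(ti.i32)   # torch.int32 (requires pytorch)
>>> ti.to_taichi_type(np.int32)  # ti.i32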
- class taichi.lang.KernelProfiler¶
Kernel profiler of Taichi.
The kernel profiler acquires kernel profiling records from the backend, counts records in Python scope, and prints the results to the console via print_info().
KernelProfiler now supports detailed low-level performance metrics (such as memory bandwidth consumption) in its advanced mode. This mode is only available for the CUDA backend with the CUPTI toolkit, i.e. you need ti.init(kernel_profiler=True, arch=ti.cuda).
Note
For details about using CUPTI in Taichi, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.
- COUNT = count¶
- TRACE = trace¶
- set_kernel_profiler_mode(self, mode=False)¶
Turn on or off
KernelProfiler
.
- get_kernel_profiler_mode(self)¶
Get status of
KernelProfiler
.
- set_toolkit(self, toolkit_name='default')¶
- get_total_time(self)¶
Get elapsed time of all kernels recorded in KernelProfiler.
- Returns
total time in second.
- Return type
time (float)
- clear_info(self)¶
Clear all records both in the front-end KernelProfiler and the back-end KernelProfilerBase instance.
Note
The values of self._profiling_mode and self._metric_list will not be cleared.
- query_info(self, name)¶
For the docstring of this function, see query_kernel_profile_info().
- set_metrics(self, metric_list=default_cupti_metrics)¶
For the docstring of this function, see set_kernel_profile_metrics().
- collect_metrics_in_context(self, metric_list=default_cupti_metrics)¶
This function is not exposed to users for now.
For usage of this function, see collect_kernel_profile_metrics().
- print_info(self, mode=COUNT)¶
Print the profiling results of Taichi kernels.
For usage of this function, see
print_kernel_profile_info()
.- Parameters
mode (str) – the way to print profiling results.
- taichi.lang.get_default_kernel_profiler()¶
We have only one KernelProfiler instance (i.e. _ti_kernel_profiler) now.
For a KernelProfiler using CuptiToolkit, GPU devices can only work in one configuration at a time. The profiling mode and metrics are configured by the host (CPU) via CUPTI APIs, and the device (GPU) uses its counter registers to collect the specified metrics. So if there were multiple instances of KernelProfiler, the device would work in the latest configuration, and the profiling configuration of the other instances would be changed as a result. For data retention purposes, multiple instances may be supported in the future.
- class taichi.lang.CuptiMetric(name='', header='unnamed_header', val_format=' {:8.0f} ', scale=1.0)¶
A class to add a CUPTI metric for KernelProfiler.
This class is designed to add user-selected CUPTI metrics. It is only available for the CUDA backend now, i.e. you need ti.init(kernel_profiler=True, arch=ti.cuda). For usage of this class, see the examples in set_kernel_profile_metrics() and collect_kernel_profile_metrics().
- Parameters
name (str) – name of the metric collected by the CUPTI toolkit, used by set_kernel_profile_metrics() and collect_kernel_profile_metrics().
header (str) – column header of this metric, used by print_kernel_profile_info().
val_format (str) – format for printing the metric value (and its unit), used by print_kernel_profile_info().
scale (float) – scale of the metric value, used by print_kernel_profile_info().
Example:
>>> import taichi as ti
>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> num_elements = 128*1024*1024
>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0
>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]
>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')
>>> # add and set user defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('global_access') + [global_op_atom]
>>> ti.set_kernel_profile_metrics(profiling_metrics)
>>> for i in range(16):
>>>     reduction()
>>> ti.print_kernel_profile_info('trace')
Note
For details about using CUPTI in Taichi, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.
- taichi.lang.default_cupti_metrics¶
- taichi.lang.get_predefined_cupti_metrics(name='')¶
- class taichi.lang.FieldsBuilder¶
A builder that constructs a SNodeTree instance.
Example:
x = ti.field(ti.i32)
y = ti.field(ti.f32)
fb = ti.FieldsBuilder()
fb.dense(ti.ij, 8).place(x)
fb.pointer(ti.ij, 8).dense(ti.ij, 4).place(y)
# After this line, `x` and `y` are placed. No more fields can be placed
# into `fb`.
#
# The tree looks like the following:
# (implicit root)
#   |
#   +-- dense +-- place(x)
#   |
#   +-- pointer +-- dense +-- place(y)
fb.finalize()
- classmethod finalized_roots(cls)¶
Gets all the roots of the finalized SNodeTree.
- Returns
A list of the roots of the finalized SNodeTree.
- property ptr(self)¶
- property root(self)¶
- property empty(self)¶
- property finalized(self)¶
- deactivate_all(self)¶
- dense(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])¶
Same as
taichi.lang.snode.SNode.dense()
- pointer(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])¶
- abstract hash(self, indices, dimensions)¶
Same as
taichi.lang.snode.SNode.hash()
- dynamic(self, index: Union[Sequence[_Axis], _Axis], dimension: Union[Sequence[int], int], chunk_size: Optional[int] = None)¶
- bitmasked(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int])¶
- bit_struct(self, num_bits: int)¶
- bit_array(self, indices: Union[Sequence[_Axis], _Axis], dimensions: Union[Sequence[int], int], num_bits: int)¶
- place(self, *args: Any, offset: Optional[Union[Sequence[int], int]] = None, shared_exponent: bool = False)¶
Same as
taichi.lang.snode.SNode.place()
- lazy_grad(self)¶
- finalize(self, raise_warning=True)¶
Constructs the SNodeTree and finalizes this builder.
- Parameters
raise_warning (bool) – Raise warning or not.
- taichi.lang.set_gdb_trigger(on=True)¶
- taichi.lang.warning(msg, warning_type=UserWarning, stacklevel=1)¶
Print warning message
- Parameters
msg (str) – message to print.
warning_type (builtin warning type) – type of warning.
stacklevel (int) – warning stack level from the caller.
- taichi.lang.any_arr¶
Alias for ArgAnyArray.
Example:
>>> @ti.kernel
>>> def to_numpy(x: ti.any_arr(), y: ti.any_arr()):
>>>     for i in range(n):
>>>         x[i] = y[i]
>>>
>>> y = ti.ndarray(ti.f64, shape=n)
>>> ... # calculate y
>>> x = numpy.zeros(n)
>>> to_numpy(x, y)  # `x` will be filled with `y`'s data.
- taichi.lang.ext_arr()¶
Type annotation for external arrays.
External arrays are formally defined as the data from other Python frameworks. For now, Taichi supports numpy and pytorch.
Example:
>>> @ti.kernel
>>> def to_numpy(arr: ti.ext_arr()):
>>>     for i in x:
>>>         arr[i] = x[i]
>>>
>>> arr = numpy.zeros(...)
>>> to_numpy(arr)  # `arr` will be filled with `x`'s data.
- taichi.lang.template¶
Alias for
Template
.
- taichi.lang.f16¶
- taichi.lang.integer_types¶
- taichi.lang.runtime¶
- taichi.lang.i¶
- taichi.lang.j¶
- taichi.lang.k¶
- taichi.lang.l¶
- taichi.lang.ij¶
- taichi.lang.ik¶
- taichi.lang.il¶
- taichi.lang.jk¶
- taichi.lang.jl¶
- taichi.lang.kl¶
- taichi.lang.ijk¶
- taichi.lang.ijl¶
- taichi.lang.ikl¶
- taichi.lang.jkl¶
- taichi.lang.ijkl¶
- taichi.lang.cfg¶
- taichi.lang.x86_64¶
The x64 CPU backend.
- taichi.lang.x64¶
The X64 CPU backend.
- taichi.lang.arm64¶
The ARM CPU backend.
- taichi.lang.cuda¶
The CUDA backend.
- taichi.lang.metal¶
The Apple Metal backend.
- taichi.lang.opengl¶
The OpenGL backend. OpenGL 4.3 required.
- taichi.lang.cc¶
- taichi.lang.wasm¶
The WebAssembly backend.
- taichi.lang.vulkan¶
The Vulkan backend.
- taichi.lang.dx11¶
The DX11 backend.
- taichi.lang.gpu¶
A list of GPU backends supported on the current system.
When this is used, Taichi automatically picks the matching GPU backend. If no GPU is detected, Taichi falls back to the CPU backend.
- taichi.lang.cpu¶
A list of CPU backends supported on the current system.
When this is used, Taichi automatically picks the matching CPU backend.
- taichi.lang.timeline_clear¶
- taichi.lang.timeline_save¶
- taichi.lang.type_factory_¶
- taichi.lang.print_kernel_profile_info(mode='count')¶
Print the profiling results of Taichi kernels.
To enable this profiler, set kernel_profiler=True in ti.init(). 'count' mode: print the statistics (min, max, avg time) of launched kernels; 'trace' mode: print the records of launched kernels with specific profiling metrics (time, memory load/store, core utilization, etc.). Defaults to 'count'.
- Parameters
mode (str) – the way to print profiling results.
Example:
>>> import taichi as ti
>>> ti.init(ti.cpu, kernel_profiler=True)
>>> var = ti.field(ti.f32, shape=1)
>>> @ti.kernel
>>> def compute():
>>>     var[0] = 1.0
>>> compute()
>>> ti.print_kernel_profile_info()
>>> # equivalent calls:
>>> # ti.print_kernel_profile_info('count')
>>> ti.print_kernel_profile_info('trace')
Note
Currently the result of KernelProfiler could be incorrect on OpenGL backend due to its lack of support for ti.sync().
For advanced mode of KernelProfiler, please visit https://docs.taichi.graphics/docs/lang/articles/misc/profiler#advanced-mode.
- taichi.lang.query_kernel_profile_info(name)¶
Query kernel elapsed time(min,avg,max) on devices using the kernel name.
To enable this profiler, set kernel_profiler=True in ti.init.
- Parameters
name (str) – kernel name.
- Returns
with member variables(counter, min, max, avg)
- Return type
KernelProfilerQueryResult (class)
Example:
>>> import taichi as ti
>>> ti.init(ti.cpu, kernel_profiler=True)
>>> n = 1024*1024
>>> var = ti.field(ti.f32, shape=n)
>>> @ti.kernel
>>> def fill():
>>>     for i in range(n):
>>>         var[i] = 0.1
>>> fill()
>>> ti.clear_kernel_profile_info()  # [1]
>>> for i in range(100):
>>>     fill()
>>> query_result = ti.query_kernel_profile_info(fill.__name__)  # [2]
>>> print("kernel executed times =", query_result.counter)
>>> print("kernel elapsed time(min_in_ms) =", query_result.min)
>>> print("kernel elapsed time(max_in_ms) =", query_result.max)
>>> print("kernel elapsed time(avg_in_ms) =", query_result.avg)
Note
[1] To get the correct result, query_kernel_profile_info() must be used in conjunction with clear_kernel_profile_info().
[2] Currently the result of KernelProfiler could be incorrect on OpenGL backend due to its lack of support for ti.sync().
- taichi.lang.clear_kernel_profile_info()¶
Clear all KernelProfiler records.
- taichi.lang.kernel_profiler_total_time()¶
Get elapsed time of all kernels recorded in KernelProfiler.
- Returns
total time in second.
- Return type
time (float)
- taichi.lang.set_kernel_profiler_toolkit(toolkit_name='default')¶
Set the toolkit used by KernelProfiler.
Currently, we only support toolkits:
'default'
and'cupti'
.- Parameters
toolkit_name (str) – string of toolkit name.
- Returns
whether the setting is successful or not.
- Return type
status (bool)
Example:
>>> import taichi as ti
>>> ti.init(arch=ti.cuda, kernel_profiler=True)
>>> x = ti.field(ti.f32, shape=1024*1024)
>>> @ti.kernel
>>> def fill():
>>>     for i in x:
>>>         x[i] = i
>>> ti.set_kernel_profiler_toolkit('cupti')
>>> for i in range(100):
>>>     fill()
>>> ti.print_kernel_profile_info()
>>> ti.set_kernel_profiler_toolkit('default')
>>> for i in range(100):
>>>     fill()
>>> ti.print_kernel_profile_info()
- taichi.lang.set_kernel_profile_metrics(metric_list=default_cupti_metrics)¶
Set metrics that will be collected by the CUPTI toolkit.
- Parameters
metric_list (list) – a list of CuptiMetric() instances; default value: default_cupti_metrics.
Example:
>>> import taichi as ti
>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> ti.set_kernel_profiler_toolkit('cupti')
>>> num_elements = 128*1024*1024
>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0
>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]
>>> # If no parameter is given, Taichi will print its pre-defined metrics list
>>> ti.get_predefined_cupti_metrics()
>>> # get Taichi pre-defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('shared_access')
>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')
>>> # add user defined metrics
>>> profiling_metrics += [global_op_atom]
>>> # metrics setting will be retained until the next configuration
>>> ti.set_kernel_profile_metrics(profiling_metrics)
>>> for i in range(16):
>>>     reduction()
>>> ti.print_kernel_profile_info('trace')
Note
Metrics setting will be retained until the next configuration.
- taichi.lang.collect_kernel_profile_metrics(metric_list=default_cupti_metrics)¶
Set temporary metrics that will be collected by the CUPTI toolkit within this context.
- Parameters
metric_list (list) – a list of CuptiMetric() instances; default value: default_cupti_metrics.
Example:
>>> import taichi as ti
>>> ti.init(kernel_profiler=True, arch=ti.cuda)
>>> ti.set_kernel_profiler_toolkit('cupti')
>>> num_elements = 128*1024*1024
>>> x = ti.field(ti.f32, shape=num_elements)
>>> y = ti.field(ti.f32, shape=())
>>> y[None] = 0
>>> @ti.kernel
>>> def reduction():
>>>     for i in x:
>>>         y[None] += x[i]
>>> # If no parameter is given, Taichi will print its pre-defined metrics list
>>> ti.get_predefined_cupti_metrics()
>>> # get Taichi pre-defined metrics
>>> profiling_metrics = ti.get_predefined_cupti_metrics('device_utilization')
>>> global_op_atom = ti.CuptiMetric(
>>>     name='l1tex__t_set_accesses_pipe_lsu_mem_global_op_atom.sum',
>>>     header=' global.atom ',
>>>     val_format='    {:8.0f} ')
>>> # add user defined metrics
>>> profiling_metrics += [global_op_atom]
>>> # metrics setting is temporary, and will be cleared when exiting this context
>>> with ti.collect_kernel_profile_metrics(profiling_metrics):
>>>     for i in range(16):
>>>         reduction()
>>>     ti.print_kernel_profile_info('trace')
Note
The configuration of the metric_list will be cleared when exiting this context.
- taichi.lang.print_memory_profile_info()¶
Memory profiling tool for LLVM backends with full sparse support.
This profiler is automatically on.
- taichi.lang.extension¶
- taichi.lang.is_extension_supported(arch, ext)¶
Checks whether an extension is supported on an arch.
- Parameters
arch (taichi_core.Arch) – Specified arch.
ext (taichi_core.Extension) – Specified extension.
- Returns
Whether ext is supported on arch.
- Return type
bool
- taichi.lang.reset()¶
Resets Taichi to its initial state.
This would destroy all the fields and kernels.
- taichi.lang.prepare_sandbox()¶
Returns a temporary directory, which will be automatically deleted on exit. It may contain the taichi_core shared object or some misc. files.
- taichi.lang.check_version()¶
- taichi.lang.try_check_version()¶
- taichi.lang.init(arch=None, default_fp=None, default_ip=None, _test_mode=False, enable_fallback=True, **kwargs)¶
Initializes the Taichi runtime.
This should always be the entry point of your Taichi program. Most importantly, it sets the backend used throughout the program.
- Parameters
default_fp (Optional[type]) – Default floating-point type.
default_ip (Optional[type]) – Default integral type.
**kwargs –
Taichi provides highly customizable compilation through kwargs, which allows for fine-grained control of Taichi compiler behavior. Below we list some of the most frequently used ones. For a complete list, please check out https://github.com/taichi-dev/taichi/blob/master/taichi/program/compile_config.h.
cpu_max_num_threads (int): Sets the number of threads used by the CPU thread pool.
debug (bool): Enables the debug mode, under which Taichi does a few more things like boundary checks.
print_ir (bool): Prints the CHI IR of the Taichi kernels.
packed (bool): Enables the packed memory layout. See https://docs.taichi.graphics/lang/articles/advanced/layout.
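Example (a minimal sketch; ti.gpu picks a supported GPU backend and falls back to the CPU if none is found, as described under the gpu attribute below):
>>> import taichi as ti
>>> ti.init(arch=ti.gpu, default_fp=ti.f32, debug=False)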
- taichi.lang.no_activate(*args)¶
- taichi.lang.block_local(*args)¶
Hints Taichi to cache the fields and to enable the BLS optimization.
Please visit https://docs.taichi.graphics/lang/articles/advanced/performance for how BLS is used.
- Parameters
*args (List[Field]) – A list of sparse Taichi fields.
- taichi.lang.mesh_local(*args)¶
- taichi.lang.cache_read_only(*args)¶
- taichi.lang.assume_in_range(val, base, low, high)¶
- taichi.lang.loop_unique(val, covers=None)¶
- taichi.lang.parallelize¶
- taichi.lang.serialize¶
- taichi.lang.vectorize¶
- taichi.lang.bit_vectorize¶
- taichi.lang.block_dim¶
- taichi.lang.global_thread_idx¶
- taichi.lang.mesh_patch_idx¶
- taichi.lang.Tape(loss, clear_gradients=True)¶
Return a context manager of TapeImpl. The context manager catches all calls to functions decorated by kernel() or grad_replaced() under the with statement, and computes all the partial gradients of a given loss variable by calling the gradient functions of those calls in reverse order when the with statement ends.
See also kernel() and grad_replaced() for gradient functions.
- Parameters
loss (Expr) – The loss field, whose shape should be ().
clear_gradients (Bool) – Before the with body starts, clear all gradients or not.
- Returns
The context manager.
- Return type
TapeImpl
Example:
>>> @ti.kernel
>>> def sum(a: ti.float32):
>>>     for I in ti.grouped(x):
>>>         y[None] += x[I] ** a
>>>
>>> with ti.Tape(loss = y):
>>>     sum(2)
- taichi.lang.clear_all_gradients()¶
Set all fields’ gradients to 0.
- taichi.lang.benchmark(_func, repeat=300, args=())¶
- taichi.lang.benchmark_plot(fn=None, cases=None, columns=None, column_titles=None, archs=None, title=None, bars='sync_vs_async', bar_width=0.4, bar_distance=0, left_margin=0, size=(12, 8))¶
- taichi.lang.stat_write(key, value)¶
- taichi.lang.is_arch_supported(arch, use_gles=False)¶
Checks whether an arch is supported on the machine.
- Parameters
arch (taichi_core.Arch) – Specified arch.
use_gles (bool) – If True, check if GLES is available; otherwise check if GLSL is available. Only effective when arch is ti.opengl. Default is False.
- Returns
Whether arch is supported on the machine.
- Return type
bool
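Example (a hedged sketch, assuming is_arch_supported is re-exported at the top level as ti.is_arch_supported, as in this listing):
>>> import taichi as ti
>>> arch = ti.cuda if ti.is_arch_supported(ti.cuda) else ti.cpu
>>> ti.init(arch=arch)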
- taichi.lang.adaptive_arch_select(arch, enable_fallback, use_gles)¶
- taichi.lang.get_host_arch_list()¶