LTN broadcasting
LTN predicate case
In LTNtorch, when a predicate (ltn.core.Predicate), function (ltn.core.Function), or connective
(ltn.core.Connective) is called, the framework automatically performs the broadcasting of the inputs.
To make a simple example, assume that we have two variables, x and y, with groundings G(x) ∈ R^(2×3) and G(y) ∈ R^(3×4).
Variable x has two individuals with three features each, while variable y has three individuals with four
features each.
Now, let us assume that we have a binary predicate P, grounded as
G(P)(x, y) = σ(MLP_θ([x, y])), where [x, y] denotes the concatenation of the two inputs. P is a learnable
predicate which maps from R^3 × R^4 to [0, 1]. In the notation, MLP_θ is a neural network,
parametrized by θ, with 7 input neurons, some hidden layers, and one output neuron. A logistic
function σ is applied to the last layer to ensure that the output lies in the range [0, 1]. By doing so, the output of P
can be interpreted as a fuzzy truth value.
Now, suppose that we want to compute P(x, y). LTNtorch automatically broadcasts the two variables before
computing the predicate. After the broadcasting, we will have the following inputs for our predicate:
a tensor of shape [6, 3] for x, in which each individual of x is repeated once for every individual of y,
and a tensor of shape [6, 4] for y, in which the individuals of y are tiled once for every individual of x.
Now, it is possible to observe that if we concatenate these two tensors along the feature dimension (torch.cat([x, y], dim=1)), we obtain a tensor of shape [6, 7] as input for our predicate.
This tensor contains all the possible combinations of the individuals of
the two variables, which are 2 × 3 = 6. After the computation of the predicate, LTNtorch organizes the output in a tensor of shape [2, 3], where
the first dimension is related to variable x, while the second dimension is related to variable y.
In out[0, 0] there will be the result of the evaluation of P on the first individual of
x, namely x_1, and the first individual of y, namely y_1; in out[0, 1] there will be the result of the evaluation of P on the first individual of
x, namely x_1, and the second individual of y, namely y_2, and so forth.
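The steps above can be sketched in plain PyTorch. This is only an illustration of what LTNtorch does internally, not its actual implementation, and the two-layer network below is a toy stand-in for MLP_θ:

```python
import torch

# Groundings: x has 2 individuals with 3 features, y has 3 individuals with 4 features
x = torch.randn(2, 3)
y = torch.randn(3, 4)

# LTN broadcasting: pair every individual of x with every individual of y
x_b = x.repeat_interleave(y.shape[0], dim=0)  # [6, 3]: x1, x1, x1, x2, x2, x2
y_b = y.repeat(x.shape[0], 1)                 # [6, 4]: y1, y2, y3, y1, y2, y3

# Concatenate along the feature dimension: one row per (x_i, y_j) pair
inputs = torch.cat([x_b, y_b], dim=1)         # [6, 7]

# Toy stand-in for MLP_theta: 7 inputs -> 1 output, logistic on top
mlp = torch.nn.Sequential(torch.nn.Linear(7, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
out = torch.sigmoid(mlp(inputs)).view(2, 3)   # reshape: dim 0 indexes x, dim 1 indexes y
```

After the final reshape, out[0, 1] is exactly the truth value of P on x_1 and y_2, as described above.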
To conclude this note: in LTNtorch, the outputs of predicates, functions, connectives, and quantifiers are
LTNObject instances. In the case of our example, the output of predicate P is
an LTNObject with the following attributes:
value: a tensor of shape torch.Size([2, 3]);
free_vars = ['x', 'y'].
Note that we have analyzed just an atomic formula (a predicate applied to variables) in this scenario. Since the variables appearing in the formula are not quantified, the
free variables in the output are both x and y. If instead of P(x, y) we had computed ∃x P(x, y),
the free_vars attribute would have been equal to ['y']. Finally, if we had computed ∀y ∃x P(x, y),
the free_vars attribute would have been an empty list.
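The effect of quantification on free_vars can be sketched in plain PyTorch. LTNtorch provides its own fuzzy aggregation operators; the max/min used below are just simple stand-ins (Gödel-style semantics) to illustrate how quantifying a variable removes its dimension from value and its label from free_vars:

```python
import torch

# Toy truth values for P(x, y): dim 0 indexes x (2 individuals), dim 1 indexes y (3)
value = torch.rand(2, 3)
free_vars = ['x', 'y']

# Exists x P(x, y): aggregate the x-dimension (max is one simple fuzzy "exists")
exists_x = value.max(dim=free_vars.index('x')).values    # shape [3]
free_after_exists = [v for v in free_vars if v != 'x']   # ['y']

# Forall y Exists x P(x, y): aggregate the remaining y-dimension (min as fuzzy "forall")
forall_y_exists_x = exists_x.min()                       # 0-dimensional tensor
free_after_forall = []                                   # no free variables left
```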
LTN function case
The same scenario explained above can be applied to an LTN function (ltn.core.Function)
instead of an LTN predicate (ltn.core.Predicate). Suppose we have the same
variables, x and y, with the same groundings, G(x) ∈ R^(2×3) and G(y) ∈ R^(3×4).
Then, suppose we have a 2-ary (2 inputs) logical function f, grounded as
G(f)(x, y) = MLP_θ([x, y]).
In this case, MLP_θ is a neural network, parametrized by θ, with 7 input neurons, some hidden layers, and
five output neurons. In other words, f is a learnable function which maps from individuals in R^3 × R^4 to individuals in R^5.
Note that, in this case, we have not applied a logistic function to the output. In fact, logical functions do not have
the constraint of having outputs in the range [0, 1].
LTNtorch applies the same broadcasting that we have seen above to the inputs of function f. The only difference is
in how the output is organized. In the case of an LTN function, the output is organized in a tensor where the first
dimensions are related to the variables given in input, while the remaining dimensions are related to the features of the individuals in output.
In our scenario, the output of f(x, y) is a tensor of shape [2, 3, 5]. The first
dimension is related to variable x, the second dimension to variable y, while the third dimension contains the
features of the individuals in output. In out[0, 0] there will be the result of the evaluation of f on the first individual of
x, namely x_1, and the first individual of y, namely y_1; in out[0, 1] there will be the result of the evaluation of f on the first individual of
x, namely x_1, and the second individual of y, namely y_2, and so forth.
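The function case can be sketched with the same PyTorch illustration as before; the only changes are the five output neurons and the absence of the logistic function (again, the small network is a toy stand-in for MLP_θ, not LTNtorch's implementation):

```python
import torch

# Same groundings as in the predicate case
x = torch.randn(2, 3)
y = torch.randn(3, 4)

# Same LTN broadcasting as before
x_b = x.repeat_interleave(y.shape[0], dim=0)  # [6, 3]
y_b = y.repeat(x.shape[0], 1)                 # [6, 4]
inputs = torch.cat([x_b, y_b], dim=1)         # [6, 7]

# Toy stand-in for MLP_theta: 7 inputs -> 5 outputs, no logistic at the end
mlp = torch.nn.Sequential(torch.nn.Linear(7, 16), torch.nn.ReLU(), torch.nn.Linear(16, 5))
out = mlp(inputs).view(2, 3, 5)  # dim 0 -> x, dim 1 -> y, dim 2 -> output features
```

Here out[0, 1] is a vector in R^5: the individual produced by f from x_1 and y_2.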
LTN connective case
LTNtorch also applies the LTN broadcasting before computing a logical connective. To make the concept clear, let us make
a simple example.
Suppose that we have variables x, y, z, and w, with the following groundings:
G(x) contains two individuals;
G(y) contains four individuals;
G(z) contains three individuals;
G(w) contains six individuals.
The number of features of each variable is not important for this example, as long as it matches the input dimension expected by the corresponding predicate.
Then, suppose that we have two binary predicates, P and Q. P maps the individuals of x and y to truth values in
[0, 1], while Q maps the individuals of z and w to truth values in [0, 1].
Suppose now that we want to compute the formula P(x, y) ∧ Q(z, w). In order to evaluate this formula, LTNtorch
follows this procedure:
it computes the result of the atomic formula P(x, y), which is a tensor of shape [2, 4]. Note that before the computation of P, LTNtorch performs the LTN broadcasting of variables x and y;
it computes the result of the atomic formula Q(z, w), which is a tensor of shape [3, 6]. Note that before the computation of Q, LTNtorch performs the LTN broadcasting of variables z and w;
it performs the LTN broadcasting of P(x, y) and Q(z, w);
it applies the fuzzy conjunction ∧. The result is a tensor of shape [2, 4, 3, 6].
Notice that the output of a logical connective is always wrapped into an LTNObject, as happens for predicates and functions.
In this simple example, the LTNObject produced by the fuzzy conjunction has the following attributes:
value: a tensor of shape torch.Size([2, 4, 3, 6]);
free_vars = ['x', 'y', 'z', 'w'].
Notice that free_vars contains the labels of all the variables appearing in P(x, y) ∧ Q(z, w). This is due
to the fact that all the variables are free in the formula, since they are not quantified by any logical quantifier. Notice also
that value has four dimensions, one for each variable appearing in the formula. These dimensions can be
indexed to retrieve the evaluation of the formula on a specific combination of individuals of x, y, z, and w.
For example, value[0, 0, 0, 0] contains the evaluation of the formula on the first individuals
of all the variables, while value[0, 0, 0, 1] contains the evaluation of the formula on the first individuals
of x, y, and z, and the second individual of w.
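The broadcasting of the two operands can be sketched in plain PyTorch, starting from toy truth values for the two atomic formulas. The product t-norm used below is one common choice of fuzzy conjunction, not necessarily the one configured in a given LTNtorch program:

```python
import torch

# Toy truth values: P(x, y) with 2 x-individuals and 4 y-individuals,
# Q(z, w) with 3 z-individuals and 6 w-individuals
p_val = torch.rand(2, 4)
q_val = torch.rand(3, 6)

# LTN broadcasting of the two operands: give each operand its own dimensions,
# ordered as the variables appear in the formula (x, y, z, w)
p_b = p_val.view(2, 4, 1, 1)
q_b = q_val.view(1, 1, 3, 6)

# Fuzzy conjunction via the product t-norm; PyTorch broadcasting expands to [2, 4, 3, 6]
conj = p_b * q_b
free_vars = ['x', 'y', 'z', 'w']

# conj[0, 0, 0, 0]: the formula on the first individuals of all variables
# conj[0, 0, 0, 1]: first individuals of x, y, z and the second individual of w
```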