Artificial neural networks having attention-based selective plasticity and methods of training the same
11210559 · 2021-12-28
Assignee
Inventors
- Soheil Kolouri (Agoura Hills, CA, US)
- Nicholas A. Ketz (Madison, WI, US)
- Praveen K. Pilly (Tarzana, CA, US)
- Charles E. Martin (Thousand Oaks, CA, US)
- Michael D. Howard (Westlake Village, CA)
CPC classification (PHYSICS): G06F18/214, G06V10/774, G06V10/454, G06F18/217, G06V20/58
International classification
Abstract
An autonomous navigation system for a vehicle includes a controller configured to control the vehicle, sensors configured to detect objects in a path of the vehicle, nonvolatile memory including an artificial neural network configured to classify the objects detected by the sensors, and a processor. The artificial neural network includes a series of neurons in each of an input layer, at least one hidden layer, and an output layer. The memory includes instructions which, when executed by the processor, cause the processor to train the artificial neural network on a first task, identify, utilizing a contrastive excitation backpropagation algorithm, important neurons for the first task, identify, utilizing a learning algorithm, important synapses between the neurons for the first task based on the important neurons identified, and rigidify the important synapses to achieve selective plasticity of the series of neurons in the artificial neural network.
Claims
1. An autonomous system for a vehicle, the autonomous system comprising: a controller configured to control the vehicle; a plurality of sensors configured to detect objects in a path of the vehicle; nonvolatile memory having an artificial neural network stored therein configured to classify the objects detected by the plurality of sensors, the artificial neural network comprising a plurality of neurons in each of an input layer, at least one hidden layer, and an output layer; and a processor, wherein the nonvolatile memory includes instructions which, when executed by the processor, cause the processor to: train the artificial neural network on a first task; identify, utilizing a contrastive excitation backpropagation algorithm, important neurons of the plurality of neurons for the first task; identify, utilizing a learning algorithm, important synapses between the plurality of neurons for the first task based on the important neurons identified; and rigidify the identified important synapses to achieve selective plasticity of the plurality of neurons in the artificial neural network when being trained on one or more new tasks.
2. The autonomous system of claim 1, wherein the instructions, when executed by the processor, further cause the processor to train the artificial neural network on a second task different than the first task.
3. The autonomous system of claim 2, wherein the instructions, when executed by the processor, further cause the processor to: send at least one input of the second task to the input layer; generate, at the output layer of the plurality of layers, at least one output based on the at least one input; generate a reward based on a comparison between the at least one output and a desired output; and modify weights of the synapses based on the reward.
4. The autonomous system of claim 3, wherein, during training of the artificial neural network on the second task, the weights of the important synapses remain constant.
5. An autonomous system for a vehicle, the autonomous system comprising: a controller configured to control the vehicle; a plurality of sensors configured to detect objects in a path of the vehicle; nonvolatile memory having an artificial neural network stored therein configured to classify the objects detected by the plurality of sensors, the artificial neural network comprising a plurality of neurons in each of an input layer, at least one hidden layer, and an output layer; and a processor, wherein the nonvolatile memory includes instructions which, when executed by the processor, cause the processor to: train the artificial neural network on a first task; identify, utilizing a contrastive excitation backpropagation algorithm, important neurons of the plurality of neurons for the first task; identify, utilizing a learning algorithm, important synapses between the plurality of neurons for the first task based on the important neurons identified; rigidify the identified important synapses to achieve selective plasticity of the plurality of neurons in the artificial neural network when being trained on one or more new tasks; train the artificial neural network on a second task different than the first task; send at least one input of the second task to the input layer of the plurality of layers; generate, at the output layer of the plurality of layers, at least one output based on the at least one input; generate a reward based on a comparison between the at least one output and a desired output; and modify weights of the synapses based on the reward, wherein, during training of the artificial neural network on the second task, the weights of the important synapses remain constant, and wherein the learning algorithm to identify important synapses is a Hebbian learning algorithm as follows:
β_ji^l = β_ji^l + P(a_j^l(x_n))P(a_i^(l+1)(x_n)), where β_ji^l is a synaptic importance parameter, x_n is an input image, a_j^l is the j'th neuron in the l'th layer of the artificial neural network, a_i^(l+1) is the i'th neuron in layer l+1 of the artificial neural network, and P is a probability.
6. An autonomous system for a vehicle, the autonomous system comprising: a controller configured to control the vehicle; a plurality of sensors configured to detect objects in a path of the vehicle; nonvolatile memory having an artificial neural network stored therein configured to classify the objects detected by the plurality of sensors, the artificial neural network comprising a plurality of neurons in each of an input layer, at least one hidden layer, and an output layer; and a processor, wherein the nonvolatile memory includes instructions which, when executed by the processor, cause the processor to: train the artificial neural network on a first task; identify, utilizing a contrastive excitation backpropagation algorithm, important neurons of the plurality of neurons for the first task; identify, utilizing a learning algorithm, important synapses between the plurality of neurons for the first task based on the important neurons identified; rigidify the identified important synapses to achieve selective plasticity of the plurality of neurons in the artificial neural network when being trained on one or more new tasks; train the artificial neural network on a second task different than the first task; send at least one input of the second task to the input layer of the plurality of layers; generate, at the output layer of the plurality of layers, at least one output based on the at least one input; generate a reward based on a comparison between the at least one output and a desired output; and modify weights of the synapses based on the reward, wherein, during training of the artificial neural network on the second task, the weights of the important synapses remain constant, and wherein the learning algorithm to identify important synapses is Oja's learning rule as follows:
γ_ji^l = γ_ji^l + ε(P_c(f_j^(l−1))P_c(f_i^l) − P_c(f_i^l)^2 γ_ji^l), where j and i index neurons, l is a layer of the artificial neural network, γ_ji^l is the importance of the synapse between the neurons f_j^(l−1) and f_i^l for the first task, ε is the rate of Oja's learning rule, and P_c is a probability (the contrastive marginal winning probability).
7. The autonomous system of claim 6, wherein the instructions, when executed by the processor, further cause the processor to update a loss function of the artificial neural network as follows:
ℒ(θ) = ℒ_B(θ) + λΣ_k γ_k(θ_k − θ*_A,k)^2, where ℒ(θ) is the loss function, ℒ_B(θ) is an original loss function for learning a second task different than the first task, λ is the regularization coefficient, γ_k is the synaptic importance parameter of Oja's learning rule, θ_k are the synaptic weights, and θ*_A,k are the optimized synaptic weights for performing the first task.
8. A non-transitory computer-readable storage medium having software instructions stored therein, which, when executed by a processor, cause the processor to: train an artificial neural network on a first task; identify, utilizing a contrastive excitation backpropagation algorithm, important neurons of the artificial neural network for the first task; identify, utilizing a learning algorithm, important synapses between the important neurons; and rigidify the identified important synapses to achieve selective plasticity of the artificial neural network when being trained on one or more new tasks.
9. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to train the artificial neural network on a second task different than the first task.
10. The non-transitory computer-readable storage medium of claim 9, wherein the instructions, when executed by the processor, further cause the processor to: send at least one input of the second task to an input layer of the artificial neural network; receive at least one output from an output layer of the artificial neural network based on the at least one input; generate a reward based on a comparison between at least one output and a desired output; and modify weights of the synapses based on the reward.
11. The non-transitory computer-readable storage medium of claim 10, wherein, during training of the artificial neural network on the second task, the weights of the important synapses remain constant.
12. The non-transitory computer-readable storage medium of claim 11, wherein the learning algorithm is a Hebbian learning algorithm as follows:
β_ji^l = β_ji^l + P(a_j^l(x_n))P(a_i^(l+1)(x_n)), where β_ji^l is a synaptic importance parameter, x_n is an input image, a_j^l is the j'th neuron in the l'th layer of the artificial neural network, a_i^(l+1) is the i'th neuron in layer l+1 of the artificial neural network, and P is a probability.
13. The non-transitory computer-readable storage medium of claim 11, wherein the learning algorithm is Oja's learning rule as follows:
γ_ji^l = γ_ji^l + ε(P_c(f_j^(l−1))P_c(f_i^l) − P_c(f_i^l)^2 γ_ji^l), where j and i index neurons, l is a layer of the artificial neural network, γ_ji^l is the importance of the synapse between the neurons f_j^(l−1) and f_i^l for the first task, ε is the rate of Oja's learning rule, and P_c is a probability (the contrastive marginal winning probability).
14. The non-transitory computer-readable storage medium of claim 13, wherein the instructions, when executed by the processor, further cause the processor to update a loss function of the artificial neural network as follows: ℒ(θ) = ℒ_B(θ) + λΣ_k γ_k(θ_k − θ*_A,k)^2, where ℒ(θ) is the loss function, ℒ_B(θ) is an original loss function for learning a second task different than the first task, λ is the regularization coefficient, γ_k is the synaptic importance parameter of Oja's learning rule, θ_k are the synaptic weights, and θ*_A,k are the optimized synaptic weights for performing the first task.
15. A method of training an artificial neural network having a plurality of layers, each layer of the plurality of layers comprising a plurality of neurons, and at least one weight matrix encoding connection weights between neurons in successive layers of the plurality of layers, the method comprising: training the artificial neural network on a first task; identifying, utilizing contrastive excitation backpropagation, important neurons for the first task; identifying, utilizing a learning algorithm, important synapses for the first task based on the important neurons identified; and rigidifying the identified important synapses to achieve selective plasticity of the plurality of neurons in the artificial neural network when being trained on one or more new tasks.
16. The method of claim 15, further comprising training the artificial neural network on a second task different than the first task, the training of the artificial neural network on the second task comprising: sending at least one input of the second task to an input layer of the plurality of layers; generating, at an output layer of the plurality of layers, at least one output based on the at least one input; generating a reward based on a comparison between the at least one output and a desired output; and modifying the connection weights based on the reward.
17. The method of claim 16, wherein, during the training of the artificial neural network on the second task, the weights of the important synapses remain constant.
18. The method of claim 17, wherein the learning algorithm is a Hebbian learning algorithm as follows:
β_ji^l = β_ji^l + P(a_j^l(x_n))P(a_i^(l+1)(x_n)), where β_ji^l is a synaptic importance parameter, x_n is an input image, a_j^l is the j'th neuron in the l'th layer of the artificial neural network, a_i^(l+1) is the i'th neuron in layer l+1 of the artificial neural network, and P is a probability.
19. The method of claim 17, wherein the learning algorithm is Oja's learning rule as follows:
γ_ji^l = γ_ji^l + ε(P_c(f_j^(l−1))P_c(f_i^l) − P_c(f_i^l)^2 γ_ji^l), where j and i index neurons, l is a layer of the artificial neural network, γ_ji^l is the importance of the synapse between the neurons f_j^(l−1) and f_i^l for the first task, ε is the rate of Oja's learning rule, and P_c is a probability (the contrastive marginal winning probability).
20. The method of claim 19, further comprising updating a loss function of the artificial neural network as follows:
ℒ(θ) = ℒ_B(θ) + λΣ_k γ_k(θ_k − θ*_A,k)^2, where ℒ(θ) is the loss function, ℒ_B(θ) is an original loss function for learning a second task different than the first task, λ is the regularization coefficient, γ_k is the synaptic importance parameter of Oja's learning rule, θ_k are the synaptic weights, and θ*_A,k are the optimized synaptic weights for performing the first task.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other features and advantages of embodiments of the present disclosure will become more apparent by reference to the following detailed description when considered in conjunction with the following drawings. In the drawings, like reference numerals are used throughout the figures to reference like features and components. The figures are not necessarily drawn to scale.
DETAILED DESCRIPTION
(10) The present disclosure is directed to various embodiments of artificial neural networks and methods of training artificial neural networks utilizing selective plasticity such that the artificial neural network can learn new tasks (e.g., road detection during nighttime) without forgetting old tasks (e.g., road detection during daytime). The selective plasticity of the present disclosure is achieved by utilizing a contrastive excitation backpropagation (c-EBP) framework, an attentional mechanism that identifies neurons significant for solving a particular task, and by utilizing Oja's learning rule to rigidify the synaptic connections between these significant neurons such that the rigidified synaptic connections do not change during learning of a new task. In this manner, the artificial neural networks of the present disclosure utilize selective plasticity of the synapses to maintain previously learned tasks while learning new tasks and thereby effectively accumulate new knowledge. That is, the artificial neural networks of the present disclosure utilize selective plasticity to learn new tasks without suffering from catastrophic forgetting, which occurs with related art artificial neural networks that employ uniform plasticity of the synapses.
(12) In the embodiment illustrated in the figures, the method 100 includes a task 110 of training the artificial neural network 200 on a first task A.
(13) In the illustrated embodiment, the method 100 also includes a task 120 of calculating or determining the neurons 202, 204, 206, 208 of the artificial neural network 200 that are significant for the performance of the first task A (i.e., the task 120 includes identifying task-significant neurons 202, 204, 206, 208 in the artificial neural network 200). In one or more embodiments, the task 120 of identifying the task-significant neurons 202, 204, 206, 208 includes performing excitation backpropagation (EBP) to obtain top-down signals that identify the task-significant neurons 202, 204, 206, 208 of the artificial neural network 200. EBP provides a top-down attention model for neural networks that enables generation of task- and class-specific attention maps. EBP introduces a back-propagation scheme by extending the idea of winner-take-all into a probabilistic setting. In one or more embodiments, the task 120 of calculating the neurons 202, 204, 206, 208 of the artificial neural network 200 that are significant for the performance of the first task may utilize the contrastive version of the EBP algorithm (c-EBP) to make the top-down signal more task-specific. In the EBP formulation, the top-down signal is defined as a function of the probability output.
(14) In one or more embodiments, the task 120 of identifying the task-significant neurons 202, 204, 206, 208 for the performance of the first task A may be performed by defining the relative importance of neuron f_j^(l−1) on the activation of neuron f_i^l as a probability distribution P(f_j^(l−1)) over neurons in layer (l−1), where f_i^l is the i'th neuron in layer l of the artificial neural network 200, f_i^l = σ(Σ_j θ_ji^l f_j^(l−1)), and θ^l denotes the synaptic weights between layers (l−1) and l. The probability distribution P(f_j^(l−1)) can be factored as follows:
P(f_j^(l−1)) = Σ_i P(f_j^(l−1) | f_i^l) P(f_i^l) (Equation 1)
where P(f_i^l) is the Marginal Winning Probability (MWP) for neuron f_i^l. Additionally, in one or more embodiments, the task 120 includes defining the conditional probability P(f_j^(l−1) | f_i^l) as follows:
(15) P(f_j^(l−1) | f_i^l) = Z_i^(l−1) f_j^(l−1) θ_ji^l (Equation 2)
where Z_i^(l−1) = (Σ_j f_j^(l−1) θ_ji^l)^(−1) is a normalization factor such that Σ_j P(f_j^(l−1) | f_i^l) = 1. For a given input x (e.g., an input image), EBP generates a heat-map in the pixel-space with respect to class y by starting with P(f_i^L = y) = 1 at the output layer 207 and applying Equation 2 above recursively.
(16) The contrastive excitation backpropagation (c-EBP) assigns a hypothetical negative neuron for the prediction y, obtained by negating the synaptic weights of the output layer, and rectifies the difference between the two back-propagated signals: P_c(f_i^l) = ReLU(P(f_i^l) − P̄(f_i^l)), where ReLU is the rectified linear function and P̄(f_i^l) is the MWP back-propagated from the negative neuron. The contrastive-MWP, P_c(f_i^l), indicates the relative importance of neuron f_i^l for the specific prediction y, and can be understood as the implicit amount of attention that the artificial neural network 200 pays to neuron f_i^l to predict y.
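The recursion of Equations 1 and 2 can be sketched numerically. Below is a minimal NumPy sketch, not the patent's implementation: it assumes non-negative activations, clips negative weights to zero so that only excitatory connections propagate the top-down signal, and negates only the output-layer weights for the contrastive pass; these details are assumptions about points the text leaves open.

```python
import numpy as np

def mwp_backward(activations, weights, p_out):
    # Propagate Marginal Winning Probabilities (Equation 1) from the
    # output layer back toward the input, using the normalized
    # conditional probabilities of Equation 2.
    # activations[l]: 1-D array of f^l; weights[l][j, i] = theta_ji
    # connecting layer l to layer l+1; p_out: distribution over outputs.
    p = p_out
    probs = [p]
    for l in reversed(range(len(weights))):
        f_prev = activations[l]
        w = np.maximum(weights[l], 0.0)    # assumed: excitatory weights only
        z = f_prev @ w                     # denominator sum_j f_j * theta_ji
        z = np.where(z == 0.0, np.inf, z)  # guard against division by zero
        # P(f_j^(l)) = sum_i [f_j * theta_ji / sum_j' f_j' theta_j'i] * P(f_i)
        p = (f_prev[:, None] * w / z[None, :]) @ p
        probs.append(p)
    return list(reversed(probs))           # MWPs from input layer to output

def contrastive_mwp(activations, weights, p_out):
    # c-EBP sketch: rerun the recursion with the output-layer weights
    # negated (the hypothetical negative neuron) and rectify the
    # difference, yielding the contrastive-MWP P_c.
    pos = mwp_backward(activations, weights, p_out)
    neg_weights = [w.copy() for w in weights]
    neg_weights[-1] = -neg_weights[-1]
    neg = mwp_backward(activations, neg_weights, p_out)
    return [np.maximum(a - b, 0.0) for a, b in zip(pos, neg)]
```

With all-positive weights and a one-hot p_out, each layer's MWPs sum to one, mirroring the probabilistic winner-take-all interpretation.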
(18) With continued reference to the illustrated embodiment, the method 100 also includes a task 130 of determining the importance of the synapses between the neurons for the performance of the first task A. In one or more embodiments, the task 130 may utilize a Hebbian learning algorithm to update a synaptic importance parameter β_ji^l for each input x_n as follows:
β_ji^l = β_ji^l + P(a_j^l(x_n))P(a_i^(l+1)(x_n)) (Equation 3)
where a_j^l is the j'th neuron in the l'th layer of the artificial neural network, a_i^(l+1) is the i'th neuron in layer l+1 of the artificial neural network, and P is a probability.
(19) Additionally, in one or more embodiments, the probability distribution for the output layer 207 is set to the one-hot vector of the input label, P(a_j^L(x_n)) = y_n.
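Treating an input's per-layer winning probabilities as the P(a) terms of Equation 3, the Hebbian accumulation can be sketched as follows; the list-of-layers representation of β is an assumption made for illustration.

```python
import numpy as np

def hebbian_importance(mwp_by_layer, beta=None):
    # Equation 3: beta_ji^l += P(a_j^l(x_n)) * P(a_i^(l+1)(x_n)),
    # applied for one input x_n given its per-layer probabilities.
    if beta is None:
        beta = [np.zeros((mwp_by_layer[l].size, mwp_by_layer[l + 1].size))
                for l in range(len(mwp_by_layer) - 1)]
    for l in range(len(beta)):
        # outer product pairs each presynaptic j with each postsynaptic i
        beta[l] += np.outer(mwp_by_layer[l], mwp_by_layer[l + 1])
    return beta
```

Because each update only adds non-negative products, β grows without bound over many inputs, which is the motivation the description gives for switching to Oja's rule.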
(20) However, Hebbian learning of the importance parameters may suffer from unbounded growth of those parameters. To avoid this problem, in one or more embodiments the task 130 of determining the synaptic importance utilizes Oja's learning rule (i.e., Oja's learning algorithm) to calculate the importance, γ_ji^l, of the synapse between the neurons f_j^(l−1) and f_i^l for the first task A as follows:
γ_ji^l = γ_ji^l + ε(P_c(f_j^(l−1))P_c(f_i^l) − P_c(f_i^l)^2 γ_ji^l) (Equation 4)
where ε is the rate of Oja's learning rule, j and i index neurons, l is a layer of the artificial neural network, and P_c is the contrastive-MWP defined above. The task 130 of updating the importance parameters via Oja's learning rule is performed in an online manner, starting from γ_ji^l = 0, during or following the back-propagation updates of the task 110 of training the artificial neural network 200.
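Reading Equation 4 as the standard Oja form, with the presynaptic contrastive-MWP as the input term and the postsynaptic contrastive-MWP as the output term, one online update can be sketched as below; the vectorized layer-at-a-time form is an illustrative assumption.

```python
import numpy as np

def oja_importance_update(gamma, p_pre, p_post, eps=0.1):
    # Equation 4 (one online step):
    # gamma_ji += eps * (P_c(f_j^(l-1)) * P_c(f_i^l)
    #                    - P_c(f_i^l)**2 * gamma_ji)
    # The decay term keeps gamma bounded, unlike pure Hebbian growth.
    gamma = gamma + eps * (np.outer(p_pre, p_post)
                           - (p_post ** 2)[None, :] * gamma)
    return gamma
```

Iterated with fixed probabilities, each γ_ji converges toward P_c(f_j)/P_c(f_i) rather than diverging, illustrating the bounded-growth property that motivates Oja's rule here.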
(21) With continued reference to the illustrated embodiment, the method 100 also includes a task 140 of rigidifying the synapses associated with the important neurons, which may be achieved by regularizing the loss function of the artificial neural network 200 as follows:
ℒ(θ) = ℒ_B(θ) + λΣ_k γ_k(θ_k − θ*_A,k)^2 (Equation 5)
where ℒ(θ) is the loss function, ℒ_B(θ) is the original loss function for learning a second task (task B) different than the first task A (i.e., the cross-entropy loss), λ is the regularization coefficient, γ_k is the synaptic importance parameter defined in Equation 4 above, θ_k are the synaptic weights, and θ*_A,k are the optimized synaptic weights for performing task A. In one or more embodiments, the importance parameters may be calculated in an online manner such that there is no need for an explicit definition of tasks and the method can adaptively learn changes in the training data.
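The regularized objective of Equation 5 can be sketched as a plain function over lists of weight arrays; the list-of-arrays layout and the scalar task-B loss argument are illustrative assumptions.

```python
import numpy as np

def selective_plasticity_loss(loss_b, theta, theta_star_a, gamma, lam=1.0):
    # Equation 5: L(theta) = L_B(theta)
    #   + lambda * sum_k gamma_k * (theta_k - theta*_A,k)**2
    # Important synapses (large gamma_k) are pulled strongly back toward
    # their task-A optima; unimportant ones remain free to change.
    penalty = sum(float(np.sum(g * (t - ts) ** 2))
                  for g, t, ts in zip(gamma, theta, theta_star_a))
    return loss_b + lam * penalty
```

This is structurally the same quadratic-anchoring pattern used by elastic weight consolidation, with the Oja-derived importances standing in for Fisher information.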
(22) In the illustrated embodiment, the method 100 also includes a task 150 of training the artificial neural network 200 on the second task B different than the first task A on which the artificial neural network 200 was trained in task 110. As described above, following the task 140 of rigidifying the synapses associated with the important neurons, the artificial neural network 200 exhibits selective plasticity without catastrophic forgetting when trained on the second task B.
(28) The methods of the present disclosure may be performed by a processor and/or a processing circuit executing instructions stored in non-volatile memory (e.g., read-only memory (“ROM”), programmable ROM (“PROM”), erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory, etc.). The term “processor” or “processing circuit” is used herein to include any combination of hardware, firmware, and software employed to process data or digital signals. The hardware of a processor may include, for example, application specific integrated circuits (ASICs), general purpose or special purpose central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), and programmable logic devices such as field programmable gate arrays (FPGAs). In a processor, as used herein, each function is performed either by hardware configured (i.e., hard-wired) to perform that function, or by more general purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium. A processor may be fabricated on a single printed wiring board (PWB) or distributed over several interconnected PWBs. A processor may contain other processors; for example, a processor may include two processors, an FPGA and a CPU, interconnected on a PWB.
(30) In the illustrated embodiment, the autonomous system 300 includes a memory device 301 (e.g., non-volatile memory, such as read-only memory (“ROM”), programmable ROM (“PROM”), erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory, etc.), a processor or a processing circuit 302, a controller 303, and at least one sensor 304. The memory device 301, the processor or processing circuit 302, the controller 303, and the at least one sensor 304 may communicate with each other over a system bus 305. In one or more embodiments in which the autonomous system 300 is configured to control an autonomous or semi-autonomous vehicle, the sensors 304 may be any suitable type or kind of sensors configured to detect objects or situations in a path of the autonomous vehicle, such as one or more cameras, lidars, and/or radars, and the controller 303 may be connected to any suitable vehicle components for controlling the vehicle, such as brakes, the steering column, and/or the accelerator, based on the objects or situations detected by the one or more sensors 304.
(31) In one or more embodiments, the memory device 301 is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to perform each of the tasks described above with reference to the flowchart of the method 100.
(32) Additionally, in one or more embodiments, the memory device 301 is programmed with an artificial neural network configured to perform one or more tasks for operating or controlling the device into which the autonomous system 300 is installed. In one or more embodiments, the artificial neural network may be stored in an online data storage unit (e.g., in the “cloud”) and accessible by the processor or processing circuit 302.
(33) In one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions (i.e., software) which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to train the artificial neural network to perform a first task A (e.g., semantic segmentation of an image captured by one of the sensors 304, such as a daytime image captured by a camera).
(34) Additionally, in one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to calculate or determine the neurons of the artificial neural network that are significant for the performance of the first task A (i.e., the task-significant neurons in the artificial neural network). In one or more embodiments, the instructions include an EBP or a c-EBP algorithm.
(35) In one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to determine the importance of the synapses between the neurons for the performance of the first task A for which the artificial neural network was trained (i.e., identify attention-based synaptic importance for the performance of the first task A). In one or more embodiments, the instructions for determining the importance of the synapses may be a Hebbian learning algorithm (e.g., Equation 3 above) or Oja's learning algorithm (e.g., Equation 4 above). Additionally, in one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to rigidify the important synapses of the artificial neural network. Rigidifying the important synapses may include causing the weights associated with those important synapses to remain fixed or substantially fixed (i.e., remain constant or substantially constant) when the artificial neural network is trained on one or more new tasks. Alternatively, rigidifying the important synapses may include causing those weights associated with the important synapses not to remain fixed, but to be allocated relatively less plasticity than the synapses that are not important for the performance of the first task A. As described above, rigidifying the synapses associated with the important neurons is configured to cause the artificial neural network to exhibit selective plasticity without catastrophic forgetting. In one or more embodiments, the instructions for rigidifying the important synapses may include an algorithm for regularizing the loss function of the artificial neural network (e.g., Equation 5 above).
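The two rigidification options described above (holding important weights fixed, or allocating them relatively less plasticity) can be sketched as a per-synapse gradient transform. The 1/(1 + λγ) scaling and the freeze threshold are hypothetical choices for illustration; the mechanism the description actually details is the Equation 5 regularizer.

```python
import numpy as np

def rigidified_gradient(grad, gamma, lam=1.0, freeze_threshold=None):
    # Hard option: zero the gradient wherever importance exceeds a
    # threshold, so those weights remain constant on the new task.
    if freeze_threshold is not None:
        return np.where(gamma > freeze_threshold, 0.0, grad)
    # Soft option: scale each gradient by a plasticity factor that
    # shrinks as importance grows (hypothetical scaling).
    return grad / (1.0 + lam * gamma)
```

Either transform is applied element-wise to a layer's weight gradient before the optimizer step, leaving unimportant synapses fully plastic.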
(36) Additionally, in one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to train the artificial neural network on a second task B different than the first task A (e.g., semantic segmentation of an image captured by one of the sensors 304, such as a nighttime image captured by a camera). Due to the rigidification of the important synapses of the artificial neural network, the artificial neural network is configured to learn the second task B without catastrophic forgetting of the first task A.
(37) In one or more embodiments, the memory device 301 or the online data storage unit is programmed with instructions which, when executed by the processor or processing circuit 302, cause the processor or processing circuit 302 to operate the controller 303 to control the device 400 in which the autonomous system 300 is incorporated in accordance with the tasks that the artificial neural network is trained to perform. For instance, in one or more embodiments in which the autonomous system 300 is incorporated into an autonomous vehicle (i.e., the device 400 is an autonomous vehicle), the instructions may cause the processor or processing circuit 302 to actuate the controller 303 to control the steering, braking, and/or acceleration of the vehicle (e.g., to avoid one or more hazardous objects or conditions classified during semantic segmentation of a daytime driving scene, a nighttime driving scene, or a rainy driving scene captured by the one or more sensors 304).
(38) While this invention has been described in detail with particular references to exemplary embodiments thereof, the exemplary embodiments described herein are not intended to be exhaustive or to limit the scope of the invention to the exact forms disclosed. Persons skilled in the art and technology to which this invention pertains will appreciate that alterations and changes in the described structures and methods of assembly and operation can be practiced without meaningfully departing from the principles, spirit, and scope of this invention, as set forth in the following claims, and equivalents thereof. Additionally, as used herein, the terms “substantially,” “about,” “approximately,” “generally,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art. Moreover, the tasks described above may be performed in the order described or in any other suitable sequence. Additionally, the methods described above are not limited to the tasks described. Instead, for each embodiment, one or more of the tasks described above may be absent and/or additional tasks may be performed.