

This article proposes a universal activation function (UAF) that achieves near optimal performance in quantification, classification, and reinforcement learning (RL) problems. For any given problem, gradient descent algorithms are able to evolve the UAF into a suitable activation function by tuning the UAF's parameters. For CIFAR-10 classification using the VGG-8 neural network, the UAF converges to a Mish-like activation function with near optimal \(F_1\) performance, and in the RL problem it reaches the target reward within the fewest epochs using a brand new activation function, which gives the fastest convergence rate among the activation functions tested.

The goal of most machine learning algorithms is to find the optimal model for a specific problem. However, finding the optimal model by hand is a daunting task because of the virtually infinite number of possible models and corresponding parameter selections. The field of automated machine learning 1, 2, 3 addresses this problem by automatically searching for machine learning models using genetic algorithms, neural networks, and their combinations with probabilistic and clustering algorithms. Genetic algorithms excel at optimizing discrete variables; for example, they can be used to optimize the number of neurons in each layer or the depth of the neural network. Neuroevolution of augmenting topologies (NEAT) 4 uses genetic algorithms to optimize the structure of neural networks: the values of the neuron weights, the types of activation functions, and the number of neurons can be optimized by breeding and mutating different species of neural networks. Instead of finding the architecture directly, HyperNEAT finds a single function that encodes the entire network.
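As a concrete illustration of how a genetic algorithm handles such discrete choices, the sketch below evolves the per-layer neuron counts and depth of a small fully connected network. The allowed layer sizes, population size, mutation rates, and the stand-in fitness function are all illustrative assumptions; a real automated machine learning loop would train each candidate network and score it on validation data.

```python
# Illustrative sketch only: a tiny genetic algorithm over discrete architecture
# choices (number of layers and neurons per layer). The fitness function is a
# placeholder; in practice it would train and validate the candidate network.
import random

LAYER_SIZES = [16, 32, 64, 128]   # allowed neurons per layer (discrete choices)
MAX_DEPTH = 4

def random_genome():
    depth = random.randint(1, MAX_DEPTH)
    return [random.choice(LAYER_SIZES) for _ in range(depth)]

def fitness(genome):
    # Placeholder: reward moderate capacity, penalize depth. A real AutoML
    # loop would build and train the network and return validation accuracy.
    return -abs(sum(genome) - 160) - 5 * len(genome)

def mutate(genome):
    g = genome[:]
    if random.random() < 0.3 and len(g) < MAX_DEPTH:
        g.append(random.choice(LAYER_SIZES))          # grow the network
    if random.random() < 0.3 and len(g) > 1:
        g.pop(random.randrange(len(g)))               # shrink the network
    g[random.randrange(len(g))] = random.choice(LAYER_SIZES)  # resize one layer
    return g

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                         # selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best architecture (neurons per layer):", max(population, key=fitness))
```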

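Returning to the article's central idea of an activation function whose shape is learned by gradient descent, the following sketch shows a trainable parameterized activation in PyTorch whose parameters are updated together with the network weights. The functional form (a difference of softplus terms plus a linear term and an offset) and the parameter names A through E are assumptions made for illustration; this is not claimed to be the exact UAF defined in the article.

```python
# Illustrative sketch only: a trainable activation whose shape parameters are
# learned by gradient descent together with the network weights. The functional
# form below is an assumption for demonstration, not the exact UAF definition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableActivation(nn.Module):
    def __init__(self):
        super().__init__()
        # Shape parameters; initialized so the function starts near a
        # softplus/identity blend and can drift toward other shapes in training.
        self.A = nn.Parameter(torch.tensor(1.0))
        self.B = nn.Parameter(torch.tensor(0.0))
        self.C = nn.Parameter(torch.tensor(0.0))
        self.D = nn.Parameter(torch.tensor(1.0))
        self.E = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        # Difference of two softplus terms plus linear and offset terms:
        # smooth and differentiable in x and in all five parameters,
        # so autograd can tune the activation's shape during training.
        return (F.softplus(self.A * (x + self.B)) + self.C * x
                - F.softplus(self.D * (x - self.B)) + self.E)

# Usage: drop it into a network like any other activation; its parameters are
# returned by model.parameters() and therefore trained by the same optimizer.
model = nn.Sequential(nn.Linear(16, 32), TrainableActivation(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
```

Because the activation is smooth in both its input and its parameters, the same optimizer that updates the weights can also reshape the activation, for instance toward ReLU-like, sigmoid-like, or Mish-like behavior, as training proceeds.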