After the neural network modifies its weights, will the new weight values be used the next time an input is presented?

Updated 2024-08-08
13 answers
  1. Anonymous user, 2024-02-15

    Yes. Training proceeds round by round over the samples, adjusting the weights as it goes: each sample is taken in order, fed into the BP algorithm, and the weights are adjusted. Every sample participates.

    Some algorithms are stochastic instead, so the order in which the samples arrive differs each round, but all samples still take part.

    In fact there are two methods. In the standard BP algorithm, the error of each input sample is propagated back and the weights are adjusted immediately; rotating through the samples this way is called "single-sample (online) training". Because single-sample training corrects only the error produced by the current sample, the number of training iterations inevitably grows and convergence is slow. The alternative is to compute the total error of the network after all samples have been presented, and then adjust the weights according to that total error; this cumulative-error batch approach is called "batch training" or "epoch training".

    When the number of samples is large, batch training converges faster than single-sample training.
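    The two schemes can be sketched as follows (a toy one-weight model; the data, learning rate, and epoch counts are assumptions for illustration, not from the original answer):

```python
# Toy model y = w*x fit to data generated by y = 2x, squared-error loss.
# Illustrates single-sample (online) vs. batch weight updates.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
lr = 0.05  # learning rate (assumed for this sketch)

def online_train(epochs):
    """Single-sample training: adjust the weight after every sample."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    return w

def batch_train(epochs):
    """Batch training: accumulate the gradient of the total error over
    all samples, then adjust the weight once per epoch."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w
```

    Both variants converge to w ≈ 2 on this toy problem; which one converges faster in practice depends on the sample count, as the answer notes.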

  2. Anonymous user, 2024-02-14

    What you are describing is the training method. The weights of a neural network are not set by hand; they are learned from a training set (containing inputs and outputs). One full pass over the training set is called an epoch, and many epochs are usually required; the goal is to reduce the error between the target and the network's output (generally the mean squared error) below a given threshold.
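    A minimal sketch of that loop (the tiny dataset, learning rate, and error threshold are assumptions for illustration):

```python
def mse(w, data):
    """Mean squared error between targets and the model's outputs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(threshold=1e-6, lr=0.1, max_epochs=1000):
    """Run epochs (full passes over the training set) until the MSE
    between target and output falls below the given threshold."""
    data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]  # inputs and outputs
    w = 0.0
    for epoch in range(1, max_epochs + 1):
        for x, y in data:                  # one epoch = one full pass
            w -= lr * 2 * (w * x - y) * x
        if mse(w, data) < threshold:
            return w, epoch
    return w, max_epochs
```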

  3. Anonymous user, 2024-02-13

    The weights of a neural network are obtained by training the network. If you use MATLAB, you don't need to set them yourself; they are assigned automatically after newff. You can also set them manually.

    Generally the input is normalized, so w and b are taken as random numbers in [0, 1]. The purpose of weight initialization is to let the network learn useful information during training, which means the parameter gradients must not be 0.

    There are two necessary conditions for parameter initialization:

    1. No activation layer saturates. For example, with the sigmoid activation function, the initial values should not be so large or so small that the inputs fall into its saturation zones.

    2. No activation value is 0. If the output of an activation layer is zero, the input to the next convolutional layer is zero, so the partial derivatives with respect to that layer's weights are zero, resulting in a zero gradient.
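    A small sketch of an initialization respecting both conditions (the range, layer sizes, and example input are assumptions for illustration):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def init_weights(n_in, n_out, scale=0.5):
    """Small random weights keep sigmoid inputs near 0, where its
    gradient is largest (condition 1: no saturation)."""
    return [[random.uniform(-scale, scale) for _ in range(n_in)]
            for _ in range(n_out)]

random.seed(0)
w = init_weights(4, 3)
x = [0.2, 0.8, 0.5, 0.1]                  # normalized input in [0, 1]
acts = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w]
# Condition 2: the activations are strictly nonzero, and the local
# gradient sigmoid'(z) = a * (1 - a) stays away from 0.
grads = [a * (1 - a) for a in acts]
```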

  4. Anonymous user, 2024-02-12

    (1) Initially, each weight is generated by a random-number function, with values in the range [-1, 1].

    (2) During training, the BP algorithm computes the gradient of the mean squared error, and the weights of the BP network are adjusted accordingly. For example: w(i,j,k+1) = w(i,j,k) + delta(e(i,j)).
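    Both steps can be sketched in code (the 2x2 layer, input, target, and learning rate are illustrative assumptions; the layer is kept linear for brevity):

```python
import random

random.seed(1)
# (1) Each weight starts as a random number in [-1, 1].
w = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]

x = [1.0, 0.5]    # input sample
t = [0.5, -0.5]   # target output
lr = 0.1          # learning rate

# (2) Forward pass, then one BP-style adjustment of every weight:
y = [sum(w[i][j] * x[j] for j in range(2)) for i in range(2)]
for i in range(2):
    for j in range(2):
        delta = -lr * 2 * (y[i] - t[i]) * x[j]  # -lr * dE/dw(i,j)
        w[i][j] = w[i][j] + delta               # w(i,j,k+1) = w(i,j,k) + delta
```

    After the update, the squared error of the network's output is strictly smaller than before the update.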

  5. Anonymous user, 2024-02-11

    The weights are generated automatically at the start, and the network is then trained on the training data; during training, the network adjusts its weights according to the output error until the output requirements are met. So the weights depend on the training data. You can try this in the MATLAB Neural Network Toolbox: type NNTOOL at the MATLAB command line to open its interface.

  6. Anonymous user, 2024-02-10

    The weights are obtained by training the network on the desired input and output data.

  7. Anonymous user, 2024-02-09

    Are you using a genetic algorithm to optimize the weights and thresholds?

    I don't know where your ** came from, so I don't know how you determined the initial weights and thresholds.

    In practice, though, these values are usually given randomly when we write such programs.

  8. Anonymous user, 2024-02-08

    OP, how do you understand it?

  9. Anonymous user, 2024-02-07

    Input-to-hidden-layer weight: w1=

    Hidden layer threshold: b1=

    Hidden-to-output-layer weight: w2=

    Output layer threshold: b2=

  10. Anonymous user, 2024-02-06

    The initial connection weights affect the training speed and convergence rate of the network; in basic neural networks they are set randomly, and during training they are adjusted in the direction that reduces the error. To address the uncertainty introduced by this randomness, some researchers have proposed initializing the weights and thresholds of a BP network with a genetic algorithm, putting forward a genetic neural network model and predicting that the next generation of neural networks will be genetic neural networks.

    Hope this helps. You can look up the literature on this topic.
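    A minimal sketch of the idea (the toy one-weight "network", population size, and genetic operators are my own assumptions, not from any specific genetic-neural-network paper):

```python
import random

random.seed(0)
data = [(x, 0.8 * x) for x in [0.5, 1.0, 1.5, 2.0]]  # toy target y = 0.8x

def fitness(w):
    """Higher fitness = lower network error for initial weight w."""
    return -sum((w * x - y) ** 2 for x, y in data)

def ga_initial_weight(pop_size=20, generations=30):
    """Choose an initial BP weight with a tiny genetic algorithm:
    tournament selection, averaging crossover, Gaussian mutation."""
    pop = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=fitness)  # tournament
            p2 = max(random.sample(pop, 2), key=fitness)
            child = 0.5 * (p1 + p2)                       # crossover
            child += random.gauss(0.0, 0.1)               # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

w0 = ga_initial_weight()  # BP training would then fine-tune from w0
```

    The GA replaces the purely random starting point with one that already has low error, which is the motivation the answer describes.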

  11. Anonymous user, 2024-02-05

    The trained weights and thresholds can be output as follows:

    Input-to-hidden-layer weight: w1=

    Hidden layer threshold: theta1=

    Hidden-to-output-layer weight: w2=

    Output layer threshold: theta2=

  12. Anonymous user, 2024-02-04

    A trained network yields definite weights and thresholds. Substitute them back into the original model (its mathematical expression), then take the partial derivative of that model with respect to each input variable to obtain the effect of that input on the output.
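    This can be sketched with a central-difference derivative (the fixed weights and thresholds below stand in for a trained network and are purely illustrative):

```python
import math

def net(x):
    """Stand-in for a trained network: weights/thresholds are fixed."""
    w1 = [[0.8, -0.4], [0.3, 0.9]]
    b1 = [0.1, -0.2]
    w2 = [1.2, -0.7]
    b2 = 0.05
    h = [math.tanh(sum(w1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(2)]
    return sum(w2[i] * h[i] for i in range(2)) + b2

def input_effect(f, x, i, eps=1e-5):
    """Partial derivative df/dx_i by central differences: the effect
    of input i on the output around the operating point x."""
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

effects = [input_effect(net, [0.5, 0.3], i) for i in range(2)]
```

    A larger magnitude in `effects[i]` means input i has a stronger local influence on the output.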

  13. Anonymous user, 2024-02-03

    What you are describing is the training method. The weights of a neural network are not set by hand; they are learned from a training set (containing inputs and outputs). One full pass over the training set is called an epoch, and many epochs are usually required, the goal being to reduce the error between the target and the network's output (generally the mean squared error) below a given threshold. Training can be supervised or unsupervised.
