Source Code
- HeatCoefficients.c_p_physical(T)
Specific heat as a function of temperature T (in Kelvin).
- HeatCoefficients.c_p_star(theta, delta_T, T_REF, c_p_ref)
Dimensionless specific heat: c_p*(theta, p) = c_p(T) / c_p_ref(p)
- HeatCoefficients.lambda_physical(T)
Thermal conductivity as a function of temperature T (in Kelvin).
- HeatCoefficients.lambda_star(theta, delta_T, T_REF, lambda_ref)
Dimensionless thermal conductivity: lambda*(theta, p) = lambda(T) / lambda_ref(p)
- HeatCoefficients.q_physical(q_star, lambda_ref, delta_T, L)
Rescale dimensionless heat flux to physical units.
- HeatCoefficients.rho_physical(T)
Density as a function of temperature T (in Kelvin).
- HeatCoefficients.rho_star(theta, delta_T, T_REF, rho_ref)
Dimensionless density: rho*(theta, p) = rho(T) / rho_ref(p), where theta = (T - p) / delta_T
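The relation between the physical and dimensionless helpers can be illustrated with a minimal sketch. The property fit, the reference values, and the flux scaling lambda_ref * delta_T / L below are assumptions for illustration, not the actual coefficients used in HeatCoefficients:

```python
import numpy as np

# Hypothetical stand-in for HeatCoefficients.c_p_physical; the real fit may differ.
def c_p_physical(T):
    """Specific heat [J/(kg K)] as a function of temperature T [K] (placeholder fit)."""
    return 4180.0 + 0.1 * (T - 273.15)

def c_p_star(theta, delta_T, T_REF, c_p_ref):
    """Dimensionless specific heat c_p(T) / c_p_ref with T = theta * delta_T + T_REF."""
    T = theta * delta_T + T_REF
    return c_p_physical(T) / c_p_ref

def q_physical(q_star, lambda_ref, delta_T, L):
    """Rescale a dimensionless heat flux to physical units (assumed scaling lambda_ref * delta_T / L)."""
    return q_star * lambda_ref * delta_T / L

# Evaluate the dimensionless specific heat on a grid of theta values.
theta = np.linspace(0.0, 1.0, 5)
print(c_p_star(theta, delta_T=80.0, T_REF=273.15, c_p_ref=c_p_physical(273.15)))
```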
- FEniCSx.solve_heatequation_dimensionless(T_ICE, T_MELTING, dt, t_end, T_REF, relax_param=None, number_cells=128, rtol=0.0001, max_iter=500)
Training an ANN to learn the transient nonlinear heat equation based on FEniCSx-generated data. Jan Habscheid, Jan.Habscheid@rwth-aachen.de
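A hedged usage sketch of the solver call; the argument values are illustrative and the return value (assumed to be the stored dimensionless solution data later consumed by ANN.load_data) should be verified against the FEniCSx script:

```python
# Illustrative call only; values and the returned object are assumptions.
result = FEniCSx.solve_heatequation_dimensionless(
    T_ICE=263.15,       # ice temperature [K]
    T_MELTING=273.15,   # melting temperature [K]
    dt=1e-3,            # time-step size
    t_end=1.0,          # end time of the simulation
    T_REF=273.15,       # reference temperature for the nondimensionalization [K]
    relax_param=None,   # optional relaxation parameter for the nonlinear solver
    number_cells=128,   # spatial resolution
    rtol=1e-4,          # nonlinear solver tolerance
    max_iter=500,       # maximum nonlinear iterations per time step
)
```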
- ANN.compile_ann(model: Sequential, optimizer, loss: str = 'mse', metrics: list = ['mae', 'mape', 'r2']) → Sequential
Compile the ANN to prepare it for training
- Parameters:
model (Sequential) – The ANN
optimizer (TensorFlow optimizer) – The optimizer to use for training
loss (str, optional) – Loss to use, by default ‘mse’
metrics (list, optional) – Additional metrics to use, by default [‘mae’, ‘mape’, ‘r2’]
- Returns:
The compiled ANN
- Return type:
Sequential
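A minimal sketch of what compile_ann presumably wraps, namely a standard Keras model.compile call; the project's 'r2' metric is assumed to be a custom metric defined elsewhere and is therefore omitted here:

```python
from tensorflow.keras.models import Sequential

def compile_ann(model: Sequential, optimizer, loss: str = 'mse',
                metrics: list = None) -> Sequential:
    # Standard Keras compilation; 'r2' would require a custom metric and is left out.
    model.compile(optimizer=optimizer, loss=loss,
                  metrics=metrics if metrics is not None else ['mae', 'mape'])
    return model
```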
- ANN.create_ann(neurons: list, activation: str = 'relu', output_activation: str = 'linear') → Sequential
Create the ANN
Builds a Sequential model from the given list of layer sizes and the chosen activation functions.
- Parameters:
neurons (list) – Number of neurons for each layer
activation (str, optional) – Activation function for hidden nodes, by default ‘relu’
output_activation (str, optional) – Activation function for output, by default ‘linear’
- Returns:
Neural network model
- Return type:
Sequential
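A plausible implementation of create_ann based on the documented parameters; that the first and last entries of neurons are the input and output sizes is an assumption:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense

def create_ann(neurons: list, activation: str = 'relu',
               output_activation: str = 'linear') -> Sequential:
    # neurons, e.g. [2, 64, 64, 1]: input size, hidden widths, output size (assumed layout).
    model = Sequential([Input(shape=(neurons[0],))])
    for width in neurons[1:-1]:
        model.add(Dense(width, activation=activation))
    model.add(Dense(neurons[-1], activation=output_activation))
    return model

# Example: two inputs, two hidden layers of 64 neurons, one output.
model = create_ann([2, 64, 64, 1])
```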
- ANN.load_data(skip_starter=1) → dict
Load the data previously generated with FEniCSx
Loads the training/testing, validation, and extrapolation data and stores it in a dictionary.
- Parameters:
skip_starter (int, optional) – Skip the first skip_starter time steps to overcome the initial discontinuity, by default 1
- Returns:
Dictionary containing the data
- Return type:
dict
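Hedged usage; the keys of the returned dictionary are defined by the FEniCSx data generation and are not listed on this page, so inspect them first:

```python
data = ANN.load_data(skip_starter=1)
print(sorted(data.keys()))   # training/testing, validation and extrapolation arrays
```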
- ANN.plot_sample_prediction(model: Sequential, data: dict, sample: int, scaler_path: str = None)
Plots sample prediction
- Parameters:
model (Sequential) – ANN
data (dict) – Data dictionary
sample (int) – Sample to plot
scaler_path (str, optional) – Path to the scaler, by default None
- ANN.plot_training_history(history, savefig=None)
Plots training loss
- Parameters:
history (History) – TensorFlow training history returned by model.fit
savefig (str, optional) – If not None, the path where the figure is stored, by default None
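A minimal equivalent of what this helper likely plots, assuming a standard Keras History object from model.fit:

```python
import matplotlib.pyplot as plt

def plot_training_history(history, savefig=None):
    # history.history holds the per-epoch metrics recorded by model.fit.
    plt.semilogy(history.history['loss'], label='training loss')
    if 'val_loss' in history.history:
        plt.semilogy(history.history['val_loss'], label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    if savefig is not None:
        plt.savefig(savefig, bbox_inches='tight')
    plt.show()
```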
- ANN.prepare_data(data: dict, scaler_path: str | None = None) → dict
Prepare the data for the ANN training
Adds the X_train, X_test, y_train, y_test, X_validate, y_validate, X_extrapolate, y_extrapolate to the data dictionary
- Parameters:
data (dict) – Unprepared data dictionary
scaler_path (str, optional) – Path to the scaler, by default None
- Returns:
Prepared data dictionary
- Return type:
dict
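A hedged sketch of the kind of preparation prepare_data likely performs: a train/test split plus feature scaling, with the scaler optionally persisted to scaler_path. The raw keys 'X' and 'y', the MinMaxScaler, and the 80/20 split are assumptions:

```python
import joblib
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def prepare_data(data: dict, scaler_path: str | None = None) -> dict:
    # Assumed raw arrays: data['X'] (inputs) and data['y'] (targets);
    # validation and extrapolation arrays would be transformed with the same scaler.
    scaler = MinMaxScaler().fit(data['X'])
    X_scaled = scaler.transform(data['X'])
    data['X_train'], data['X_test'], data['y_train'], data['y_test'] = \
        train_test_split(X_scaled, data['y'], test_size=0.2, random_state=42)
    if scaler_path is not None:
        joblib.dump(scaler, scaler_path)   # persist the scaler for later predictions
    return data
```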
- ANN.train_ann(model: Sequential, data: dict, batch_size: int = 32, epochs: int = 1000, callbacks: list = None, verbose: int = 0) → tuple
Trains the ANN
- Parameters:
model (Sequential) – ANN
data (dict) – Data dictionary
batch_size (int, optional) – Batch size, by default 32
epochs (int, optional) – Number of epochs for training, by default 1000
callbacks (list, optional) – Callbacks, by default None
verbose (int, optional) – Whether to print each epoch, by default 0
- Returns:
model, history
- Return type:
tuple
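The ANN helpers on this page can be chained into one end-to-end run. The following is an illustrative sketch; the layer sizes, optimizer, callbacks, and file paths are assumptions, not values prescribed by the module:

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

data = ANN.load_data(skip_starter=1)
data = ANN.prepare_data(data, scaler_path='scaler.pkl')        # path is illustrative

model = ANN.create_ann([data['X_train'].shape[1], 64, 64, 1])  # layer sizes assumed
model = ANN.compile_ann(model, optimizer=Adam(learning_rate=1e-3))

model, history = ANN.train_ann(
    model, data,
    batch_size=32, epochs=1000,
    callbacks=[EarlyStopping(patience=50, restore_best_weights=True)],
    verbose=0,
)

ANN.plot_training_history(history, savefig='loss.png')
ANN.plot_sample_prediction(model, data, sample=0, scaler_path='scaler.pkl')
```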
- PINN.Q_calc(T_ICE)
- PINN.T_calc(Q)
- PINN.trainPINN(t_end: float, ntime: int, epochs: int, layer: int, width: int, activation: str) → tuple
Train the PINN model for the heat equation with the given parameters.
Disclaimer: The following docstring was written with generative AI and may contain errors!
The trainPINN function is designed to train a Physics-Informed Neural Network (PINN) model for solving the heat equation. The function takes several parameters that define the training process and the structure of the neural network. Here’s an extended summary of the function:
Function Workflow
- Initialization:
The function starts by recording the start time of the training process and calculates the time step size dt from t_end and ntime. A flag Training is set to False by default, which prevents retraining if the model has already been trained; if the directory for saving the model does not yet exist, it is created and Training is set to True.
- Domain and Boundary Conditions:
Defines the spatial domain as a rectangle. Discretizes the time domain into ntime steps. Defines boundary conditions for the left and right boundaries of the spatial domain.
- Training Grid:
Creates a training grid for spatial points and boundary condition axis. Initializes the previous solution values to zero and sets up an interpolator for the previous solution.
- Model Training:
Initializes lists to store trained models, loss history, and training state, then iterates over each time step to train the model:
Defines the PDE residual function, which includes the interpolation of the previous solution and the computation of the explicit time derivative.
Sets up the dataset for the current time step.
Defines and trains the neural network model using the specified parameters.
Saves the model during training if Training is True, otherwise restores a pretrained model.
Updates the previous solution values and interpolator for the next time step.
Appends the trained model to the list of trained models.
- Finalization:
Records the end time and calculates the total training time, saves the training time to a text file, and returns the trained models, loss history, and training state. Overall, the function handles domain setup, boundary conditions, model training, and saving of the results for the heat-equation PINN.
- Parameters:
t_end (float) – The end time for the simulation.
ntime (int) – The number of time steps for the simulation.
epochs (int) – The number of epochs for training the model.
layer (int) – The number of layers in the neural network.
width (int) – The width of each layer in the neural network.
activation (str) – The activation function to be used in the neural network.
- Returns:
A tuple containing the trained models, loss history, and training state.
- Return type:
tuple
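A strongly simplified, hedged sketch of the time-stepping idea described above, written in plain TensorFlow on a 1D interval with one network per time step. The rectangular domain with a boundary-condition axis, the model saving/restoring, and the file handling of the actual trainPINN are not reproduced; the boundary values, zero initial condition, and unit diffusivity are assumptions:

```python
import numpy as np
import tensorflow as tf

def train_pinn_sketch(t_end, ntime, epochs, layer, width, activation):
    """One network per time step; the residual uses the explicit time derivative (u - u_prev)/dt."""
    dt = t_end / ntime
    x = tf.constant(np.linspace(0.0, 1.0, 101)[:, None], dtype=tf.float32)
    u_prev = tf.zeros_like(x)           # previous solution, initialized to zero (assumption)
    u_left, u_right = 1.0, 0.0          # illustrative Dirichlet boundary values

    models, loss_history = [], []
    for _ in range(ntime):
        hidden = [tf.keras.layers.Dense(width, activation=activation) for _ in range(layer)]
        net = tf.keras.Sequential(hidden + [tf.keras.layers.Dense(1)])
        opt = tf.keras.optimizers.Adam(1e-3)

        for _ in range(epochs):
            with tf.GradientTape() as tape:                  # gradients w.r.t. the weights
                with tf.GradientTape() as t2:                # second derivative in x
                    t2.watch(x)
                    with tf.GradientTape() as t1:            # first derivative in x
                        t1.watch(x)
                        u = net(x)
                    u_x = t1.gradient(u, x)
                u_xx = t2.gradient(u_x, x)
                residual = (u - u_prev) / dt - u_xx          # heat equation, diffusivity 1
                loss = (tf.reduce_mean(tf.square(residual))
                        + tf.square(u[0, 0] - u_left)
                        + tf.square(u[-1, 0] - u_right))
            grads = tape.gradient(loss, net.trainable_variables)
            opt.apply_gradients(zip(grads, net.trainable_variables))

        u_prev = net(x)                  # becomes the previous solution for the next step
        models.append(net)
        loss_history.append(float(loss))
    return models, loss_history
```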