Commit 9d274970 authored by Tizian Wenzel

Initial commit.

parent 96c4529b
# paper-2024-pde-greedy
Adaptive meshfree approximation for linear elliptic partial differential equations with PDE-greedy kernel methods
=========================================================================================
This repository contains the supplementary material for the publication
```
Adaptive meshfree approximation for linear elliptic partial differential equations with PDE-greedy kernel methods (2022)
T. Wenzel, D. Winkle, G. Santin, B. Haasdonk
```
It was used to carry out the numerical experiments and generate the figures for the publication.
The experiments were performed on Linux systems in 2024 and should work with Python versions of that time (e.g., `python3.9`).
## Installation
To run the Python experiments, create a virtual environment and install the [PDE-VKOGA](https://gitlab.mathematik.uni-stuttgart.de/pub/ians-anm/pde-vkoga) package.
For convenience, these steps can be done with the help of the `setup_python_experiments.sh` file.
To activate the virtual environment, use `source venv/bin/activate`.
To reproduce the experiments, run the files within the folder `experiments_pde_vkoga`.
To run the MATLAB experiments (i.e., to reproduce the FEM part of the convergence table in Section 6.1), please
a) install RBmatlab from the website www.morepas.org/software (the code can be run if RBmatlab version 16.09 is installed), and
b) run the script `fem_sector_example.m` from this directory.
Caution:
The FEM calculation is very expensive (it takes several hours), as hundreds of
thousands of points with global coordinates need to be searched and found in the
FEM meshes. This is required to keep the error computation consistent with the
other approximation techniques, which use these uniform test grids.
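
For orientation, the following is a minimal Python sketch (not part of the repository scripts; `u_exact` and `u_approx` are placeholder callables) of how such a uniform test grid on the sector and the corresponding discrete errors can be formed, mirroring `errors_uniformgrid` in `fem_sector_example.m`:

```python
import numpy as np

def sector_test_grid(alpha, h=0.002):
    """Uniform grid on [-1, 1]^2, restricted to the unit disk and to the
    sector of opening angle alpha * pi (same construction as in the scripts)."""
    x = np.arange(-1.0, 1.0 + h, h)
    XX, YY = np.meshgrid(x, x)
    pts = np.column_stack((XX.ravel(), YY.ravel()))
    pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]
    phi = np.arctan2(pts[:, 1], pts[:, 0])
    phi = phi + (phi < 0) * 2 * np.pi  # map angles to [0, 2*pi)
    return pts[phi < alpha * np.pi], h

def discrete_errors(u_exact, u_approx, alpha):
    """Maximum error and grid-quadrature L2 error on the uniform sector test grid."""
    pts, h = sector_test_grid(alpha)
    diff = u_exact(pts) - u_approx(pts)
    return np.max(np.abs(diff)), np.sqrt(h * h * np.sum(diff ** 2))
```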
## How to cite:
If you use this code in your work, please cite the paper
> T. Wenzel, D. Winkle, G. Santin, and B. Haasdonk. Adaptive meshfree solution
of linear partial differential equations with PDE-greedy kernel methods. ArXiv,
(2207.13971), 2022. Submitted.
```bibtex
@article{wenzel2022adaptive,
  doi       = {10.48550/ARXIV.2207.13971},
  url       = {https://arxiv.org/abs/2207.13971},
  author    = {Wenzel, Tizian and Winkle, Daniel and Santin, Gabriele and Haasdonk, Bernard},
  title     = {Adaptive meshfree solution of linear partial differential equations with {PDE}-greedy kernel methods},
  journal   = {ArXiv},
  number    = {2207.13971},
  publisher = {arXiv},
  year      = {2022},
  note      = {Submitted}
}
```
function fem_sector_example(step)
%function fem_sector_example(step)
%
% Demonstration of FEM error convergence for sector example.
%
% This operation is very expensive (takes several hours) as hundreds of
% thousands of points with global coordinates need to be searched and found
% in the FEM meshes. This is required to keep the error computation consistent
% with other approximation techniques using these uniform test grids.
%
% The code can be run if RBmatlab from version 16.09 is installed
% as obtained from www.morepas.org/software
% B. Haasdonk 8.5.2024
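%
% step 1 (commented out below): generate and store the sequence of refined meshes
% step 2: load and plot the two sector meshes
% step 3: single FEM simulation on one mesh, with plots of grid and solution
% step 4 (default): FEM convergence study that prints the error table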
if nargin < 1
step = 4;
end;
disp('FEM error convergence study for sector geometry')
disp('Note that the global to local coordinate search is extremely')
disp('expensive due to the many test points, and will overall take')
disp('several hours.')
switch step
% case 1 % create scale of refined gridfiles
% fns = {'sectorg_alpha5over3','sectorg_alpha2over3'};
% for fni = 1:length(fns)
% fn = fns{fni}
% [p,e,t] = initmesh(fn);
% meshfile = [fn,'_r0.mat'];
% save(meshfile,'p','e','t');
% for i = 1:4
% [p,e,t] = refinemesh(fn,p,e,t);
% meshfile = [fn,'_r',num2str(i),'.mat'];
% save(meshfile,'p','e','t');
% end;
% end;
% disp('mesh sequence generated and stored');
% % continue with error computation:
% fem_sector_example(4);
case 2 % load and plot mesh
load('sectorg_alpha5over3_r1','p','t');
grid = triagrid(p,t,[]);
plot(grid);
axis equal;
figure;
load('sectorg_alpha2over3_r1','p','t');
grid = triagrid(p,t,[]);
plot(grid);
axis equal;
case 3 % fem on mesh
params = [];
% params.solution_number = 1; % example with smooth solution
params.solution_number = 2; % example with singular sol, inhom bv
% params.alpha = 5/3;
% params.grid_initfile = 'sectorg_alpha5over3_r1.mat';
% params.grid_initfile = 'sectorg_alpha5over3_r2.mat';
% params.grid_initfile = 'sectorg_alpha5over3_r3.mat';
% params.grid_initfile = 'sectorg_alpha5over3_r4.mat';
params.alpha = 2/3;
params.grid_initfile = 'sectorg_alpha2over3_r1.mat';
% params.grid_initfile = 'sectorg_alpha2over3_r2.mat';
% params.grid_initfile = 'sectorg_alpha2over3_r3.mat';
% params.grid_initfile = 'sectorg_alpha2over3_r4.mat';
model = pacman_model(params);
% generate grid and fem matrices:
model_data = gen_model_data(model);
figure, plot(model_data.grid);
axis equal; axis tight; title('FEM grid')
sim_data = detailed_simulation(model, model_data);
% plot results
figure, plot_sim_data(model,model_data,sim_data,[]);
title('FEM solution of -Laplace u = f')
axis equal;
axis tight;
% disp(['ndofs = ',num2str(sim_data.uh.df_info.ndofs)]);
case 4 % fem on mesh and convergence study
params = [];
params.solution_number = 1; % example with smooth solution
% params.solution_number = 2; % example with singular sol, inhom bv
disp(' ');
disp('ndofs | L2-error | H1-error | infty-error | L2-error-grid | infty-error-grid | t_CPU')
disp('---------------------------------------------------------------------------------------------------------------------')
for i = 1:4
% params.alpha = 5/3;
% params.grid_initfile = ['sectorg_alpha5over3_r',num2str(i),'.mat'];
% params.grid_initfile = 'sectorg_alpha5over3_r2.mat';
% params.grid_initfile = 'sectorg_alpha5over3_r3.mat';
% params.grid_initfile = 'sectorg_alpha5over3_r4.mat';
params.alpha = 2/3;
params.grid_initfile = ['sectorg_alpha2over3_r',num2str(i),'.mat'];
% params.grid_initfile = 'sectorg_alpha2over3_r2.mat';
% params.grid_initfile = 'sectorg_alpha2over3_r3.mat';
% params.grid_initfile = 'sectorg_alpha2over3_r4.mat';
model = pacman_model(params);
% generate grid and fem matrices:
tic
model_data = gen_model_data(model);
sim_data = detailed_simulation(model, model_data);
t = toc;
% error computation:
% project exact solution onto higher degree polynomial fem func
par.pdeg = 4;
par.qdeg = 8;
par.dimrange = 1;
p4_df_info = feminfo(par,model_data.grid);
uexact_h = femdiscfunc([],p4_df_info);
uexact_h = fem_interpol_global(model.solution,uexact_h);
uh = femdiscfunc([],p4_df_info);
u_local_eval = @(grid,elids,lcoord,params) ...
my_uh_local_eval(grid,elids,lcoord,params,sim_data.uh);
uh = fem_interpol_local(u_local_eval,uh);
err = uh - uexact_h;
if i == 1
plot(err);
title('error for i=1')
axis equal;
axis tight;
end;
% keyboard;
l2err = fem_l2_norm(err);
h1err = fem_h1_norm(err);
linftyerr = max(abs(err.dofs));
[l2err_uniformgrid, linftyerr_uniformgrid] = errors_uniformgrid(model,uh);
% l2err_uniformgrid = zeros(size(l2err));
% linftyerr_uniformgrid = zeros(size(l2err));
ndofs = sim_data.uh.df_info.ndofs;
disp([num2str(ndofs,'%10.4d'),' | ',...
num2str(l2err,'%10.5e'),' | ',...
num2str(h1err,'%10.5e'),' | ',...
num2str(linftyerr,'%10.5e'),' | ',...
num2str(l2err_uniformgrid,'%10.5e'),' | ',...
num2str(linftyerr_uniformgrid,'%10.5e'),' | ',...
num2str(t,'%10.5e')]);
end;
otherwise
error('step number unknown');
end;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% auxiliary functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [l2err_uniformgrid, linftyerr_uniformgrid] = errors_uniformgrid(model, uh);
% determine test points
h = 0.002;
% h = 0.002;
% h = 0.002;
x = -1:h:1;
[XX,YY] = meshgrid(x,x);
i = find((XX.^2+YY.^2)<=1);
XX = XX(i);
YY = YY(i);
Phi = atan(YY./XX);
i = find(Phi<0 & XX>=0);
Phi(i) = Phi(i) + 2*pi;
i = find(XX<0);
Phi(i) = Phi(i) + pi;
i = find(Phi<model.alpha*pi);
XX = XX(i);
YY = YY(i);
Phi = Phi(i);
%scatter3(XX,YY,Phi)
test_sol = model.solution([XX,YY]);
test_uh = zeros(size(test_sol));
% evaluate fem approximation in global points
% bad... : loop over points, should be vectorized...
f = waitbar(0,'Iterating over points');
for i = 1:length(Phi);
if mod(i,1000)==0;
waitbar(i/length(Phi),f,'Iterating over points');
end;
eind = find_triangle(uh.grid,[XX(i),YY(i)]);
if eind>0
lcoord = global2local(uh.grid,eind,[XX(i),YY(i)]);
test_uh(i) = fem_evaluate(uh,eind,lcoord);
% sanity check: transform local coordinate back to global:
p = local2global(uh.grid,eind,lcoord,[]);
d = p-[XX(i),YY(i)];
% if norm(d)>10*eps
% disp('norm of reconstructed point too large!, please inspect');%
% keyboard
% end;
else
test_uh(i) = NaN;
end;
end;
close(f);
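% Discrete errors on the uniform test grid: points that could not be located in
% any triangle (NaN entries) are excluded from the L2 sum; the L2 error is a
% grid-quadrature approximation sqrt(h^2 * sum of squared differences).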
linftyerr_uniformgrid = max(abs(test_sol-test_uh));
i = find(~isnan(test_uh));
l2err_uniformgrid = sqrt(h*h*sum((test_sol(i)-test_uh(i)).^2));
%figure;scatter3(XX,YY,double(isnan(test_uh)))
% if isnan(l2err_uniformgrid)
% disp('NaN in error!')
% keyboard;
% end;
function eind = find_triangle(grid,glob)
% returns the index of a triangle in grid containing the global point glob
% if no triangle is found, then -1 is returned.
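% The test proceeds edge by edge: the sign of the 2D cross product of the edge
% vector (vertex j to vertex j+1) with the vector from vertex j to the query
% point is compared with the triangle orientation (cross product with the vector
% to the third vertex); a differing sign marks the point as lying outside.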
inside = ones(grid.nelements,1);
for j = 1:3 % check if point is "above" edge connecting point j to j+1
jp1 = mod(j,3)+1;
jp2 = mod(jp1,3)+1;
Xj = grid.X(grid.VI(:,j));
Yj = grid.Y(grid.VI(:,j));
Xjp1 = grid.X(grid.VI(:,jp1));
Yjp1 = grid.Y(grid.VI(:,jp1));
Vjjp1 = [Xjp1-Xj, Yjp1-Yj];
Vjglob = [glob(1)*ones(size(Xj)) - Xj, glob(2)*ones(size(Yj)) - Yj];
crossz = sign(Vjjp1(:,1).*Vjglob(:,2) - Vjjp1(:,2).*Vjglob(:,1));
if j==1
Xjp2 = grid.X(grid.VI(:,jp2));
Yjp2 = grid.Y(grid.VI(:,jp2));
Vjjp2 = [Xjp2-Xj, Yjp2-Yj];
crossz2 = sign(Vjjp1(:,1).*Vjjp2(:,2) - Vjjp1(:,2).*Vjjp2(:,1)); % orientation of trias
end;
i = find(crossz.*crossz2<0);
inside(i) = 0;
end;
eind = find(inside);
if isempty(eind)
eind = -1;
end
if length(eind)>1
% disp('length eind > 1, please check! ');
eind = eind(1);
end;
%if length(eind)==1
% disp('length eind == 1, nice :-) ');
%end;
% settings for pacman model
function model = pacman_model(params);
if ~isfield(params,'alpha')
alpha = 5/3;
else
alpha = params.alpha;
end;
% disp(['chosen alpha = ',num2str(alpha)]);
%alpha = 5/3; % hard coded in sectorg.m if changed here, change there!
model = poisson_model(params);
model.alpha = alpha;
model = rmfield(model,{'boundary_type','normals',...
'xnumintervals','ynumintervals','xrange','yrange'});
model.has_reaction = 0;
model.has_advection = 0;
model.has_diffusivity = 1;
model.has_source = 1;
model.has_dirichlet_values = 1;
model.has_neumann_values = 0;
model.has_robin_values = 0;
model.compute_output_functional = 0;
switch params.solution_number
case 1 % smooth solution
model.solution = @(glob,params) ...
sum(glob.^2,2);
model.source = @(glob,params) ...
- 4 * ones(size(glob,1),1);
case 2 % solution with singularity and inhomogeneous bnd val.
model.solution = @(glob,params) ...
pacman_exact_solution(glob',alpha);
model.source = @(glob,params) ...
neg_Laplace_pacman_exact_solution(glob',alpha);
% case 3 % solution with singularity and homogeneous bnd val.
% % something seems to be buggy here, no convergence observed...
% model.solution = @(glob,params) ...
% pacman_exact_solution2(glob',alpha);
% model.source = @(glob,params) ...
% neg_Laplace_pacman_exact_solution2(glob',alpha);
% error('please use solution_number=1 or 2.')
end;
model.diffusivity_tensor = @(glob,params) ...
[ones(size(glob,1),1),...
zeros(size(glob,1),1),...
zeros(size(glob,1),1),...
ones(size(glob,1),1)];
model.reaction = @(glob,params) zeros(size(glob,1),1);
model.dirichlet_values = @(glob,params) ...
params.solution(glob,params);
model.grid_initfile = params.grid_initfile;
model.gridtype = 'triagrid';
model.pdeg = 1;
model.qdeg = 2;
model.dimrange = 1;
model = elliptic_discrete_model(model);
%model.detailed_simulation = @pacman_detailed_simulation;
function f = pacman_exact_solution(x,alpha);
% function u(x) = |x|^(1/alpha)sin(phi(x)/alpha)
% with phi(x) = atan(x2/x1).
% which has -Laplace u = 0 on pacman shape
% but non-homogeneous boundary values.
f1 = sum(x.^2,1).^(0.5/alpha);
phi = atan(x(2,:)./x(1,:));
i = find(phi<0 & x(1,:)>=0);
phi(i) = phi(i) + 2*pi;
i = find(x(1,:)<0);
phi(i) = phi(i) + pi;
f2 = sin(phi/alpha);
i = find(isnan(f2));
f2(i) = 0;
f = f1.*f2;
function f = neg_Laplace_pacman_exact_solution(x,alpha);
f = zeros(size(x,2),1);
function res = my_uh_local_eval(grid,elids,lcoord,params,df)
% dummy function used for evaluating a discrete function at finer
% lagrange-grid nodes
res = fem_evaluate(df,elids,lcoord,[],[]);
function [x,y]=sectorg_alpha2over3(bs,s)
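% Geometry function in the format expected by initmesh for the sector domain with
% opening angle 2*pi/3: boundary segment 1 is the radius on the positive x-axis,
% segment 2 the circular arc, and segment 3 the closing radius at angle 2*pi/3.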
nbs=3;
alpha = 2/3 * pi;
if nargin==0,
x=nbs; % number of boundary segments
return
end
d=[
0 0 1% start parameter value
1 alpha 0% end parameter value
1 1 1% left hand region
0 0 0% right hand region
];
bs1=bs(:)';
if find(bs1<1 | bs1>nbs),
error('semicircleg:InvalidBs', 'Non existent boundary segment number.')
end
if nargin==1,
x=d(:,bs1);
return
end
x=zeros(size(s));
y=zeros(size(s));
[m,n]=size(bs);
if m==1 && n==1,
bs=bs*ones(size(s)); % expand bs
elseif m~=size(s,1) || n~=size(s,2),
error('semicircleg:SizeBs', 'bs must be scalar or of same size as s.');
end
if ~isempty(s),
% boundary segment 1
ii=find(bs==1);
x(ii) = s(ii);
y(ii) = zeros(size(ii));
% boundary segment 2
ii=find(bs==2);
x(ii) = cos(s(ii));
y(ii) = sin(s(ii));
% boundary segment 3
ii=find(bs==3);
x(ii) = s(ii)*cos(alpha);
y(ii) = s(ii)*sin(alpha);
end
# Code for the two sector domain examples of the paper "Adaptive meshfree approximation for linear
# elliptic partial differential equations with PDE-greedy kernel methods" by T. Wenzel, D. Winkle, G. Santin, B. Haasdonk
# Some imports
import numpy as np
from datetime import datetime
from matplotlib import pyplot as plt
from vkoga_pde.kernels_PDE import cubicMatern_laplace
from vkoga_pde.vkoga_PDE import VKOGA_PDE
from experiments_pde_vkoga.utilities_pacman import sample_domain_pacman, get_function_pacman
np.random.seed(1)
## Create some data: circular sector ("pacman") domain
dim=2
N1 = int(4e5)
N2 = int(3e3)
maxIter = 1305 # 341, 1305
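# N1: number of random candidate points drawn in [-1, 1]^2 (later restricted to the sector),
# N2: number of points per boundary segment, maxIter: number of greedy iterations.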
## Pick a kernel
kernel = cubicMatern_laplace(dim=2)
## Set up lists to loop over
list_alpha = [2/3]
list_str_function = ['singular'] # 'smooth'
list_beta = [1]
list_weight_dirichlet = [10**3]
n_iter = len(list_alpha) * len(list_str_function) * len(list_beta) * len(list_weight_dirichlet)
## Actually start the computations
idx_counter = 0
dic_results = {}
for alpha in list_alpha:
dic_results[alpha] = {}
# Sample the domain
X1, X2, X1_grid = sample_domain_pacman(N1, N2, alpha)
for str_function in list_str_function:
dic_results[alpha][str_function] = {}
# Define some functions: Example from Bernard
u, f = get_function_pacman(str_function, alpha)
# Compute solution on fine grid
u_X1_grid = u(X1_grid)
# Define right hand side values
y1, y2 = f(X1), u(X2)
y1_sol = u(X1)
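        # y1: PDE right-hand side f at the interior points X1, y2: Dirichlet values u at
        # the boundary points X2, y1_sol: exact solution at X1 (used to track the error
        # of the greedy iterates).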
for beta in list_beta:
dic_results[alpha][str_function][beta] = {}
for weight_dirichlet in list_weight_dirichlet:
dic_results[alpha][str_function][beta][weight_dirichlet] = {}
# Compute stuff
model_pde_vkoga = VKOGA_PDE(kernel=kernel, beta=beta, weight_dirichlet=weight_dirichlet, verbose=True)
_ = model_pde_vkoga.fit(X1, y1, X2, y2, maxIter=maxIter, y1_sol=y1_sol)
# Store values
dic_results[alpha][str_function][beta][weight_dirichlet]['f_sol'] = np.array(model_pde_vkoga.train_hist['f sol'])
dic_results[alpha][str_function][beta][weight_dirichlet]['f'] = np.array(model_pde_vkoga.train_hist['f'])
dic_results[alpha][str_function][beta][weight_dirichlet]['n_ctrs_bdry'] = len(model_pde_vkoga.ind_ctrs2)
idx_counter += 1
# save dictionary
print(datetime.now().strftime("%H:%M:%S"), 'Computation {}/{} finished.'.format(idx_counter, n_iter))
# Compute some test errors (to be sure that there is no overfitting!)
y_pred_s_ = model_pde_vkoga.predict_s(X1_grid)
diff_s_ = y_pred_s_ - u(X1_grid)
y_pred_Ls_ = model_pde_vkoga.predict_Ls(X1_grid)
diff_Ls_ = y_pred_Ls_ - f(X1_grid)
print('Max error on grid: ', np.max(np.abs(diff_s_)))
print('L2 error on grid: ', np.linalg.norm(diff_s_)/np.sqrt(len(diff_s_)))
## Plot the points
plt.figure(11)
plt.clf()
plt.plot(model_pde_vkoga.ctrs_[model_pde_vkoga.ind_ctrs1][:, 0], model_pde_vkoga.ctrs_[model_pde_vkoga.ind_ctrs1][:, 1], 'ro', markersize=3)
plt.plot(model_pde_vkoga.ctrs_[model_pde_vkoga.ind_ctrs2][:, 0], model_pde_vkoga.ctrs_[model_pde_vkoga.ind_ctrs2][:, 1], 'kx', markersize=10)
# axis ratio equal
plt.gca().set_aspect('equal')
plt.show(block=False)
## Plot the decay of the approximation errors
plt.figure(12)
plt.clf()
plt.plot(model_pde_vkoga.train_hist['f sol'])
# plot horizontal line at np.max(np.abs(diff_s_))
# plt.axhline(y=np.max(np.abs(diff_s_)), color='r', linestyle='-')
plt.yscale('log')
plt.xscale('log')
plt.title('|u-s_n|')
plt.show(block=False)
plt.figure(13)
plt.clf()
plt.plot(np.array(model_pde_vkoga.train_hist['f'])**(1/2))
# plot horizontal line at np.max(np.abs(diff_Ls_))
# plt.axhline(y=np.max(np.abs(diff_Ls_)), color='r', linestyle='-')
plt.yscale('log')
plt.xscale('log')
plt.title('|L(u-s_n)|')
plt.show(block=False)
# Code for the high dimensional example of the paper "Adaptive meshfree approximation for linear
# elliptic partial differential equations with PDE-greedy kernel methods" by T. Wenzel, D. Winkle, G. Santin, B. Haasdonk
# Some imports
import numpy as np
from datetime import datetime
from matplotlib import pyplot as plt
from vkoga_pde.kernels_PDE import cubicMatern_laplace, Gaussian_laplace
from vkoga_pde.vkoga_PDE import VKOGA_PDE
np.random.seed(1)
# Create data set: Square domain
dim = 12
scale_domain = 1/np.sqrt(dim)
X1 = scale_domain * np.random.rand(int(1e5), dim)
X2 = scale_domain * np.random.rand(2*dim*400, dim)
# Place 400 boundary points on each of the two opposite faces x_i = 0 and x_i = scale_domain for every coordinate direction
for idx_dim in range(dim):
X2[2*idx_dim*400 : (2*idx_dim+1)*400, idx_dim] = 0
X2[(2*idx_dim+1) * 400 : (2*idx_dim + 2) * 400, idx_dim] = scale_domain
X1_test = scale_domain * np.random.rand(int(1e5), dim)
u = lambda x: np.sum(x * x, axis=1, keepdims=True) + 1  # exact solution u(x) = ||x||^2 + 1; on the scaled cube [0, 1/sqrt(dim)]^dim its values lie in [1, 2]
f = lambda x: -2 * dim * np.ones((x.shape[0], 1))  # right-hand side f = -Laplace(u) = -2*dim (constant)
# Define right hand side values
y1 = f(X1)
y1_sol = u(X1)
y2 = u(X2)
# Initialize and run models
kernel = cubicMatern_laplace(dim=dim)
maxIter = 1001
list_beta = [1]
list_weightings = [1e0, 1e3, 1e5]
n_iter = len(list_beta) * len(list_weightings)
## Actually start the computations
idx_counter = 0
dic_results_train = {}
list_models = []
for beta in list_beta:
dic_results_train[beta] = {}
for weight_dirichlet in list_weightings:
dic_results_train[beta][weight_dirichlet] = {}
# Compute stuff
model_pde_vkoga = VKOGA_PDE(kernel=kernel, beta=beta, weight_dirichlet=weight_dirichlet, verbose=True)
_ = model_pde_vkoga.fit(X1, y1, X2, y2, maxIter=maxIter, y1_sol=y1_sol)
list_models.append(model_pde_vkoga)
# Store values
dic_results_train[beta][weight_dirichlet]['f_sol'] = np.array(model_pde_vkoga.train_hist['f sol'])
dic_results_train[beta][weight_dirichlet]['f'] = np.array(model_pde_vkoga.train_hist['f'])
dic_results_train[beta][weight_dirichlet]['n_ctrs_bdry'] = len(model_pde_vkoga.ind_ctrs2)
idx_counter += 1
print(datetime.now().strftime("%H:%M:%S"), 'Computation {}/{} finished.'.format(idx_counter, n_iter))
## For intermediate model sizes, compute test errors
list_n_exp_size = list(np.geomspace(10, model_pde_vkoga.ctrs_.shape[0], 20, dtype=int))
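# 20 logarithmically spaced intermediate expansion sizes between 10 and the final number of centers.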
dic_results_test = {}
for beta in list_beta:
dic_results_test[beta] = {}
for idx_model, model in enumerate(list_models): # this corresponds to the different weightings!!
print(datetime.now().strftime("%H:%M:%S"), 'Computation for model {}/{} started.'.format(idx_model, len(list_models)))
dic_results_test[beta][list_weightings[idx_model]] = {}
dic_results_test[beta][list_weightings[idx_model]]['list_s_test_pred'] = []
dic_results_test[beta][list_weightings[idx_model]]['list_Ls_test_pred'] = []
for n_expansion_size in list_n_exp_size:
print(datetime.now().strftime("%H:%M:%S"), 'Computation n_expansion_size = {} started.'.format(n_expansion_size))
# Restrict indices to intermediate model sizes
ind_ctrs1 = [idx for idx in model.ind_ctrs1 if idx < n_expansion_size]
ind_ctrs2 = [idx for idx in model.ind_ctrs2 if idx < n_expansion_size]
ind_ctrs3 = [idx for idx in model.ind_ctrs3 if idx < n_expansion_size]
# Compute coefficients for the restricted problem
coef_ = model.Cut_[:n_expansion_size, :n_expansion_size].transpose() @ model.c[:n_expansion_size]
# predict s (code taken from PDE VKOGA)
s_test_pred = model.kernel.d2_eval(X1_test, model.ctrs_[ind_ctrs1]) @ coef_[ind_ctrs1] \
+ model.kernel.eval(X1_test, model.ctrs_[ind_ctrs2]) @ coef_[ind_ctrs2] \
+ model.kernel.mixed_k_n(X1_test, model.ctrs_[ind_ctrs3], model.n_ctrs_[ind_ctrs3]) @ coef_[ind_ctrs3]
# predict Ls (code taken from PDE VKOGA)
Ls_test_pred = model.kernel.dd_eval(X1_test, model.ctrs_[ind_ctrs1]) @ coef_[ind_ctrs1] \
+ model.kernel.d1_eval(X1_test, model.ctrs_[ind_ctrs2]) @ coef_[ind_ctrs2] \
+ model.kernel.mixed_L_n(X1_test, model.ctrs_[ind_ctrs3], model.n_ctrs_[ind_ctrs3]) @ coef_[ind_ctrs3]
# Compute errors
diff_s = s_test_pred - u(X1_test)
diff_Ls = Ls_test_pred - f(X1_test)
# Append errors
dic_results_test[beta][list_weightings[idx_model]]['list_s_test_pred'].append(np.max(np.abs(diff_s)))
dic_results_test[beta][list_weightings[idx_model]]['list_Ls_test_pred'].append(np.max(np.abs(diff_Ls)))
plt.figure(1001)
plt.clf()
for weight in list_weightings:
plt.plot(list_n_exp_size, dic_results_test[beta][weight]['list_s_test_pred'], 'x-', label='{:.1e}'.format(weight))
plt.yscale('log')
plt.xscale('log')
plt.title('Max error on test set')
plt.legend(loc='lower left')
plt.show(block=False)
plt.figure(1002)
plt.clf()
for weight in list_weightings:
plt.plot(list_n_exp_size, dic_results_test[beta][weight]['list_Ls_test_pred'], 'x-', label='{:.1e}'.format(weight))
plt.yscale('log')
plt.xscale('log')
plt.title('Max L-error on test set')
plt.legend(loc='lower left')
plt.legend()
plt.show(block=False)
import numpy as np
import math
dim = 2
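# get_function_pacman returns a pair of callables (u, f): u is the exact solution on the
# sector domain and f = -Laplace(u) is the corresponding right-hand side.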
def get_function_pacman(str_function, alpha=None):
if str_function == 'singular':
assert alpha is not None, 'Provide alpha value for str_function = singular!'
u = lambda x: np.linalg.norm(x, axis=1, keepdims=True) ** (1 / alpha) \
* np.sin(my_arctan(x[:, [0]], x[:, [1]]) / alpha) + 1
f = lambda x: np.zeros((x.shape[0], 1))
elif str_function == 'smooth':
u = lambda x: np.linalg.norm(x, axis=1, keepdims=True) ** 2 + 1
f = lambda x: -4 * np.ones((x.shape[0], 1))
return u, f
def my_arctan(x1, x2):
# Returns angle in the interval [0, 2pi]
phi = np.arctan2(x2, x1)
phi += (phi < 0) * 2 * math.pi
return phi
def sample_domain_pacman(n1, n2, alpha):
assert 0 <= alpha <= 2, 'alpha must satisfy 0 <= alpha * pi <= 2pi'
# Create interior
X1 = 2*np.random.rand(n1, dim) - 1
X1 = X1[np.linalg.norm(X1, axis=1) < 1]
X1 = X1[my_arctan(X1[:, 0], X1[:, 1]) < math.pi * alpha, :] # this is not 100% correct due to points with y=0
# Create boundary
array_linspace = np.linspace(0, 1, n2).reshape(-1, 1)
X2_Gamma1 = np.concatenate((array_linspace, np.zeros_like(array_linspace)), axis=1)
X2_Gamma2 = np.concatenate((np.cos(array_linspace * alpha * math.pi), np.sin(array_linspace * alpha * math.pi)), axis=1)
X2_Gamma3 = np.concatenate((np.cos(alpha * math.pi) * array_linspace,
np.sin(alpha * math.pi) * array_linspace), axis=1)
X2 = np.concatenate((X2_Gamma1, X2_Gamma2, X2_Gamma3), axis=0)
# Create a meshgrid using numpy.meshgrid
X1_grid0_, X1_grid1_ = np.meshgrid(np.linspace(-1, 1, 1001), np.linspace(-1, 1, 1001))
X1_grid = np.concatenate((X1_grid0_.reshape(-1, 1), X1_grid1_.reshape(-1, 1)), axis=1)
X1_grid = X1_grid[np.linalg.norm(X1_grid, axis=1) < 1, :] # shrink to circle
X1_grid = X1_grid[my_arctan(X1_grid[:, 0], X1_grid[:, 1]) < alpha * math.pi, :] # shrink to angle
return X1, X2, X1_grid
set -e
export BASEDIR="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
cd "${BASEDIR}"
# Create and source virtualenv
if [ -e "${BASEDIR}/venv/bin/activate" ]; then
echo "using existing virtualenv"
else
echo "creating virtualenv ..."
virtualenv --python=python3 venv
fi
source venv/bin/activate
# Upgrade pip and install libraries (dependencies are also installed)
pip install --upgrade pip
pip install git+https://gitlab.mathematik.uni-stuttgart.de/pub/ians-anm/pde-vkoga@v0.1.1