Stephan Hilb / DualTVDD.jl / Commits

Commit 74143c2d
authored Jan 7, 2024 by Stephan Hilb

    update scaling example

Parent: 245a5ba8
No related branches, tags, or merge requests found.

Showing 2 changed files with 32 additions and 9 deletions:

- scripts/run_experiments.jl: 12 additions, 9 deletions
- scripts/run_parallel_scaling.jl: 20 additions, 0 deletions
scripts/run_experiments.jl (+12 −9) @ 74143c2d
```diff
 # experiments
+using Distributed
 using Random: MersenneTwister
 using LinearAlgebra: Diagonal, I, dot, norm, mul!, diagm
 using Statistics: median
-using CSV
-using Colors: HSV
-using DataFrames
-using Distributed
 @everywhere using DualTVDD
 @everywhere using DualTVDD: DualTVL1ROFOpProblem, ...
@@ -15,6 +12,10 @@ using Distributed
     SurrogateAlgorithm,
     DualTVDDSurrogateAlgorithm,
     init, step!, fetch, fetch_u, gradient, divergence, normB, energy, residual
+using CSV
+using Colors: HSV
+using DataFrames
 using DualTVDD: fetch_u
 using FFTW
 using FileIO
...
@@ -270,10 +271,11 @@ function experiment_scaling_opticalflow(ctx)
     λ = 0.01
     β = 0.001
-    ninner = 300
-    Mdir = 2 * floor(Int, sqrt(nworkers())) # to have enough workers available
-    M = (Mdir, Mdir)
-    overlap = (5, 5)
+    ninner = 500
+    #Mdir = 2 * floor(Int, sqrt(nworkers())) # to have enough workers available
+    Mdir = nworkers()
+    M = (4, Mdir)
+    overlap = (10, 10)
     stopenergy = 130.
     ntimings = 3
...
@@ -298,6 +300,7 @@ function experiment_scaling_opticalflow(ctx)
     tg = timeit(galg)
+    # divide by 2^2 due to coloring
     nparallel = prod(M) ÷ 2^2
     ws = workers()
     @assert nparallel <= length(ws)
...
@@ -315,7 +318,7 @@ function experiment_scaling_opticalflow(ctx)
     CSV.write(joinpath(ctx.outdir, "timings.csv"), df)
     savedata(joinpath(ctx.outdir, "data.tex");
-        lambda = λ, beta = β, Mdir, M = prod(M),
+        lambda = λ, beta = β, M1 = M[1], M2 = M[2], M = prod(M),
         ntimings, stopenergy, ninner,
         width = size(fo, 2), height = size(fo, 1))
 end
```
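The added comment `# divide by 2^2 due to coloring` reflects how the domain decomposition schedules subdomains: with a 2×2 coloring of the two-dimensional subdomain grid, only subdomains of the same color are solved concurrently, so at most `prod(M) ÷ 2^2` workers are active at once. A minimal sketch of that arithmetic (the grid size here is illustrative, not taken from the commit):

```julia
# Illustrative only: a 4x4 subdomain grid with a 2x2 checkerboard coloring.
M = (4, 4)                     # number of subdomains per dimension
ncolors = 2^2                  # 2 colors per dimension in 2D
nparallel = prod(M) ÷ ncolors  # subdomains of one color, solved concurrently
@assert nparallel == 4
```

This also explains the `@assert nparallel <= length(ws)` guard in the experiment: the timing is only a fair scaling measurement if every same-colored subdomain gets its own worker.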
scripts/run_parallel_scaling.jl (new file, mode 100644, +20 −0) @ 74143c2d
```julia
# do not use a startup file: `julia --startup-file=no`
# otherwise precompilation will be done over and over again

using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()

using Distributed
addprocs(8)

@everywhere using Pkg
@everywhere Pkg.activate(@__DIR__)
@everywhere Pkg.instantiate()

@everywhere using Revise
includet(joinpath(@__DIR__, "run_experiments.jl"))

const datapath = joinpath(@__DIR__, "..", "..", "data")

ctx = Util.Context(datapath)
ctx(experiment_scaling_opticalflow, "fd/scaling/opticalflow")
```
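The new script follows the standard Distributed.jl setup pattern: activate the project environment on the master process, spawn workers with `addprocs`, then repeat the environment activation under `@everywhere` so that package loading also succeeds on every worker. A self-contained sketch of that pattern, using only the standard library (the `whoami` helper is hypothetical, for illustration):

```julia
using Distributed

addprocs(2)                          # spawn two local worker processes

# code prefixed with @everywhere is defined on the master and on every worker
@everywhere whoami() = myid()

# ask each worker for its own process id
ids = [remotecall_fetch(whoami, w) for w in workers()]
@assert ids == workers()
@assert !(1 in ids)                  # the master (id 1) is not in workers()

rmprocs(workers())                   # clean up the spawned processes
```

Repeating `Pkg.activate(@__DIR__)` on the workers matters because `addprocs` starts them in the default environment; without it, `@everywhere using DualTVDD` would fail on the workers even though it works on the master.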