{"id":654,"date":"2025-06-29T14:46:13","date_gmt":"2025-06-29T13:46:13","guid":{"rendered":"https:\/\/www.alanknipmeyer.phd\/?p=654"},"modified":"2025-06-29T16:22:51","modified_gmt":"2025-06-29T15:22:51","slug":"end-of-june-updates","status":"publish","type":"post","link":"https:\/\/www.alanknipmeyer.phd\/index.php\/2025\/06\/29\/end-of-june-updates\/","title":{"rendered":"End of June Updates !"},"content":{"rendered":"\n<p>Here we are, the end of June: so much progress, not much blogging.. still, here&#8217;s a succinct update.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Research Ethical Approval<\/h2>\n\n\n\n<p>As a postgraduate researcher I have to ensure my research is ethical and approved by the University. Bournemouth University provides an extensive <a href=\"https:\/\/www.bournemouth.ac.uk\/research\/research-environment\/research-governance-integrity\/how-apply-formal-ethics-review\">ethics checklist<\/a> which I have to review, providing the necessary information. Essentially I have to supply details demonstrating that what I am doing is neither 1) illegal nor 2) ethically unsound.<\/p>\n\n\n\n<p>I was able to provide all the required evidence covering my data acquisition, storage and use, ensuring that at no point would my research encroach on the public domain. After completing and submitting all the forms I was pleased that my research was approved and I could safely use and store data in the way I had described.<\/p>\n\n\n\n<p>What is important here is that *if* my data had been acquired outside of my lab &#8211; there are plenty of abundant RF sources from IoT devices available &#8211; this would have been far more involved. The more you engage with external environments\/people, the more time you should leave to write up your ethics and data plan !<\/p>\n\n\n\n<p>TL;DR &#8211; Even in a &#8220;simple&#8221; research lab, getting ethical approval is essential, and takes time to complete. 
Allow plenty of time ahead of starting your research to get ethical approval, so you can start collecting and using data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Lab updates<\/h2>\n\n\n\n<p>So a lot of my time has been spent building the lab up. This involves data acquisition, processing and user interfaces. I&#8217;ll do a short summary of each.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Acquisition\n<ul class=\"wp-block-list\">\n<li>ChipWhisperer updates\n<ul class=\"wp-block-list\">\n<li>I&#8217;ve had the ChipWhisperer and boards since my MSc in 2022; there have been quite a few updates in that time, so I&#8217;ve updated my board and software. Safe to say I&#8217;m excited that everything is working well &#8211; I tested the board by capturing AES traces and running the example labs from Jupyter notebooks. Shown here is DPA to obtain a password.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full is-style-default\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1007\" height=\"826\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/Lab2_1B_PowerAnalaysis_3.jpg\" alt=\"\" class=\"wp-image-658\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/Lab2_1B_PowerAnalaysis_3.jpg 1007w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/Lab2_1B_PowerAnalaysis_3-300x246.jpg 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/Lab2_1B_PowerAnalaysis_3-768x630.jpg 768w\" sizes=\"(max-width: 1007px) 100vw, 1007px\" \/><figcaption class=\"wp-element-caption\">Differential Power Analysis to extract a password via brute force and signal processing.<\/figcaption><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Slurm Cluster\n<ul class=\"wp-block-list\">\n<li>MPI + PMIx\n<ul class=\"wp-block-list\">\n<li>OpenMPI and PMIx have been added to the Slurm cluster. 
I now have 3 GPU nodes with a variety of GPUs (P40\/P100\/RTX 3050) and want the ability to mesh between them (i.e. training on the P40s, inference on the P100s, the 3050 for general use) and to pass messages between the nodes as needed. MPI allows parallel processes running on different nodes to communicate with each other, and it combines with PMIx to manage those processes across the nodes. What&#8217;s really cool about this is that once I recognise the bottlenecks in my setup, I can still leverage as many or as few GPUs as I like thanks to the new partitioning schema. If I want to do a batch train and then use that model, I can set up the Slurm job and Python script to go from training to inference. This will be very useful when doing hyperparameter tuning and classification validation.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/3f5a2caa-6639-4d85-a7e5-e1f21335e23c.png\" alt=\"\" class=\"wp-image-663\" style=\"width:377px;height:auto\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/3f5a2caa-6639-4d85-a7e5-e1f21335e23c.png 1024w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/3f5a2caa-6639-4d85-a7e5-e1f21335e23c-300x300.png 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/3f5a2caa-6639-4d85-a7e5-e1f21335e23c-150x150.png 150w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/3f5a2caa-6639-4d85-a7e5-e1f21335e23c-768x768.png 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Slurm Cluster with MPI and PMIx<\/figcaption><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Array\/Partition\/QoS\n<ul class=\"wp-block-list\">\n<li>With a variety of GPU and CPU types in the cluster I wanted to add more specificity; in Slurm this is 
handled via generic resource (GRES) scheduling and by assigning nodes to various partitions. For my research I generated the following:<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>partition<\/td><td>nodes<\/td><td>nodelist<\/td><\/tr><tr><td>debug<\/td><td>5<\/td><td>erebus,slurm-cpu[1-2],tuf,zeus<\/td><\/tr><tr><td>gpu-inference<\/td><td>1<\/td><td>erebus<\/td><\/tr><tr><td>gpu-train<\/td><td>1<\/td><td>zeus<\/td><\/tr><tr><td>gpu-pseries<\/td><td>2<\/td><td>erebus,zeus<\/td><\/tr><tr><td>gpu-gen<\/td><td>1<\/td><td>tuf<\/td><\/tr><tr><td>cpu*<\/td><td>2<\/td><td>slurm-cpu[1-2]<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>I&#8217;m now able to define in my Slurm batch or srun which GPU\/CPU partition to use. To prove everything was running well, I set up a simple &#8216;multinode&#8217; job which showed which node was being utilised for each task.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>#!\/bin\/bash\n#SBATCH --job-name=my_array_test\n#SBATCH --output=logs\/job_%A_%a.out\n#SBATCH --error=logs\/job_%A_%a.err\n#SBATCH --array=1-1000\n#SBATCH --ntasks=1\n#SBATCH --time=00:10:00\n#SBATCH --mem=1G\n#SBATCH --partition=debug\n\necho \"Running job array task ${SLURM_ARRAY_TASK_ID} on node $(hostname)\"\nsleep $(( RANDOM % 30 ))\n<\/code><\/pre>\n\n\n\n<p>I could then run squeue to see all the nodes in the &#8216;debug&#8217; partition fully used:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)\n   4066_&#91;146-1000]     debug my_array     alze PD       0:00      1 (Resources)\n          4066_142     debug my_array     alze  R       0:01      1 erebus\n          4066_143     debug my_array     alze  R       0:01      1 tuf\n          4066_145     debug my_array     alze  R       0:01      1 slurm-cpu2\n          4066_137     debug my_array     alze  R       0:04      1 zeus\n          4066_138     debug my_array 
    alze  R       0:04      1 slurm-cpu2\n          4066_139     debug my_array     alze  R       0:04      1 erebus\n          4066_141     debug my_array     alze  R       0:04      1 erebus\n          4066_132     debug my_array     alze  R       0:07      1 slurm-cpu1\n      <\/code><\/pre>\n\n\n\n<p>Whilst I&#8217;m the sole user of the cluster, it is nevertheless good to get familiar with accounting and QoS. One thing to consider is that I don&#8217;t run the cluster full-tilt during the daytime when I&#8217;m working, as with all the fans on it can get quite noisy. I do aim to sort out the environmental conditions in due course, but that requires a whole lab move..  I set up a simple MariaDB for the accounting data and some simple QoS patterns:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sacctmgr list account format=Name,Description,Organization\n      Name                Descr                  Org \n---------- -------------------- -------------------- \n                  mylab account                mylab \n                 prolab account               prolab \n                 reslab account               reslab \n           default root account                 root \n                  test accounts         knipmeyer-it \n            unrestricted access         unrestricted <\/code><\/pre>\n\n\n\n<p>The Slurm accounting tool &#8216;sacct&#8217; can produce really good reports on usage; it&#8217;s possible to see the allocated CPUs\/GPUs and the time used. In a real-life cluster (such as SCW), the more GPUs requested for a batch job, the longer the wait for those GPUs to become available. 
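<\/p>\n\n\n\n<p>If I did want one, a minimal QoS could look something like the following &#8211; the QoS name and limits here are hypothetical, just to illustrate the shape of the sacctmgr commands:<\/p>\n\n\n\n
```shell
# hypothetical QoS: cap each user at 1 GPU and 2-hour jobs (keeps daytime fan noise down)
sacctmgr add qos quietday
sacctmgr modify qos quietday set MaxTRESPerUser=gres/gpu=1 MaxWall=02:00:00
# allow my user to request it
sacctmgr modify user alze set qos+=quietday
# then in a batch script:  #SBATCH --qos=quietday
```
\n\n\n\n<p>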
For me, I have no such QoS setup as yet, but I&#8217;m now familiar with how to set one up.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Slurm WebUI &#8211; OnDemand with DEX \/ LDAPS<\/li>\n<\/ul>\n\n\n\n<p>Whilst having ssh\/cli access via native clients is fine, a WebUI allows use of the lab from any location I can reach it from. To enable a more &#8216;user friendly&#8217; interface to the Slurm cluster I installed &#8216;Open OnDemand&#8217;. To achieve this I basically cloned a login node &#8211; where I would normally ssh in and run the Slurm command line tools (sbatch\/srun, etc.) &#8211; and added the Open OnDemand packages. Because I already have LDAP(S) and shared home directories set up on the cluster, it was a case of getting OnDemand and Dex to speak to the LDAP server; this took quite a bit of time, but nevertheless a few hours of googling\/reading documentation led to a working configuration ! Whilst I still have a lot more to learn about adding templates\/jobs and interactive apps to OnDemand, I&#8217;m really pleased with the WebUI.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img decoding=\"async\" width=\"1024\" height=\"403\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1-1024x403.png\" alt=\"\" class=\"wp-image-664\" style=\"width:430px;height:auto\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1-1024x403.png 1024w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1-300x118.png 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1-768x302.png 768w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1-1536x605.png 1536w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-1.png 1722w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Login screen to OnDemand using my LDAP Credentials<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter 
size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"649\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-2-1024x649.png\" alt=\"\" class=\"wp-image-665\" style=\"width:421px;height:auto\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-2-1024x649.png 1024w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-2-300x190.png 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-2-768x487.png 768w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-2.png 1192w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Slurm Cluster Status<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"371\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-1024x371.png\" alt=\"\" class=\"wp-image-667\" style=\"width:519px;height:auto\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-1024x371.png 1024w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-300x109.png 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-768x279.png 768w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-1536x557.png 1536w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-4-2048x743.png 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>File browser (with upload !)<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"619\" src=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-5-1024x619.png\" alt=\"\" class=\"wp-image-668\" style=\"width:532px;height:auto\" srcset=\"https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-5-1024x619.png 1024w, 
https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-5-300x181.png 300w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-5-768x464.png 768w, https:\/\/www.alanknipmeyer.phd\/wp-content\/uploads\/2025\/06\/image-5.png 1394w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Interactive Shell to Login node<\/p>\n\n\n\n<p>I&#8217;m planning on putting a reverse proxy \/ nginx with SSL in front so I can access the OnDemand portal from outside of the lab, with suitable hardening\/IP whitelisting in place to control access.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">External Engagements &#8211; CHES Challenge 2025<\/h2>\n\n\n\n<p>So now that I have the cluster set up, I thought it would be fun to try a live challenge on it. The <a href=\"https:\/\/pace-tl.gitbook.io\/ches-challenge-2025\">CHES Challenge 2025<\/a> involves cryptanalysis using Python, and introduces some interesting problems to overcome. Whilst the datasets are provided and labelled, they contain noise and jitter &#8211; in my own lab I aim to remove these before pre-processing, but in a real-world side channel attack you are more likely to encounter them, and to need software solutions in the training phase to overcome the noise and jitter in the electromagnetic trace collections.<\/p>\n\n\n\n<p>So far I have written functions for jitter and noise, but am facing the classic issue of overfitting, so my results are still way off; nevertheless I am not discouraged. 
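<\/p>\n\n\n\n<p>The fix I&#8217;m aiming for is simple early stopping &#8211; halt training once the validation loss stops improving. A minimal sketch of the idea (an illustrative helper, not my actual training loop):<\/p>\n\n\n\n
```python
# Early-stopping sketch (illustrative only): stop when validation loss
# has not improved for `patience` consecutive epochs.
def early_stop_epoch(val_losses, patience=3):
    best = float('inf')
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, stale = loss, 0  # new best; reset the patience counter
        else:
            stale += 1
        if stale >= patience:
            return epoch  # stop training here
    return len(val_losses) - 1  # patience never exhausted

# toy run: validation loss bottoms out at epoch 2, then climbs
print(early_stop_epoch([1.9, 1.2, 0.9, 1.1, 1.4, 1.8, 2.3]))  # -> 5
```
\n\n\n\n<p>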
I am working on limiting the number of epochs based on the validation fit and results, which should stop the model becoming only as good as, or worse than, brute-force\/random attacks.<\/p>\n\n\n\n<p>Whilst the contest is still live and finishes on August 15th, I will share what I have learned after that time \ud83d\ude42<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>train Epoch Loss: 0.0275 Epoch Acc: 0.9974\nval Epoch Loss: 11.2832 Epoch Acc: 0.201<\/code><\/pre>\n\n\n\n<p> It&#8217;s overfitting&#8230; :\<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Progressing the literature review<\/h2>\n\n\n\n<p>So when I&#8217;m not programming or building out new features in the lab, I&#8217;m reading. A lot of reading. Part of this is building knowledge and references for the all-important literature review section of the thesis. Work on this goes on across the whole thesis-writing stage, almost up until the viva\/defence. <\/p>\n\n\n\n<p>I took a recent publication which reviews many types of encryption attacks, by <a href=\"https:\/\/arxiv.org\/pdf\/2402.10030\">Zunaidi et al.<\/a> (BibTeX ref below).<\/p>\n\n\n\n<p>The whole publication is excellent, almost a &#8216;readers digest&#8217; of recent cryptanalysis publications. As I read through the paper, I leaned on the excellent appendix, which collects the information into a tabulated format. 
I transposed this into Excel and filtered it, which gave me an excellent set of publications to read through and provide substance for my lit review.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@article{zunaidi2024systematic,\n  title={Systematic Literature Review of EM-SCA Attacks on Encryption},\n  author={Zunaidi, Muhammad Rusyaidi and Sayakkara, Asanka and Scanlon, Mark},\n  journal={arXiv preprint arXiv:2402.10030},\n  year={2024}\n}<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">50+ Part-Time PGR &#8211; Wellbeing<\/h2>\n\n\n\n<p>So I&#8217;m into my 50s now; as well as working 9-5 I make time every single day for more research activities. This leads to a lot of desk time &#8211; if I&#8217;m not careful, easily 18-19 hours (I&#8217;m not making this up..) at my desk, with a break for walking the dog and the routine things. As ever, this has led to an increase in my weight, poor back posture (aches and pains) and overall just not being as fit as I should be.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.bournemouth.ac.uk\/why-bu\/sportbu\">Bournemouth University<\/a> has an excellent gym; being a sports-science university, it has more than the usual running machines and weights. I&#8217;m going to check it out and hopefully get into a workout regime I&#8217;ve not had in over 10 years &#8211; I&#8217;m looking forward to getting a bit fitter and feeling better in myself !<\/p>\n\n\n\n<p>Well, this has taken me longer than I thought to write up; I&#8217;ve probably missed loads, but I think this is good for now \ud83d\ude42<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Here we are, the end of June: so much progress, not much blogging.. still, here&#8217;s a succinct update. Research Ethical Approval As a postgraduate researcher I have to ensure my research is ethical and approved by the University. 
Bournemouth University provides an extensive ethics checklist which I have to review and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-654","post","type-post","status-publish","format-standard","hentry","category-uncategorised"],"_links":{"self":[{"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/posts\/654","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/comments?post=654"}],"version-history":[{"count":3,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/posts\/654\/revisions"}],"predecessor-version":[{"id":672,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/posts\/654\/revisions\/672"}],"wp:attachment":[{"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/media?parent=654"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/categories?post=654"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.alanknipmeyer.phd\/index.php\/wp-json\/wp\/v2\/tags?post=654"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}