Assignment Chef


33,401 assignments available

[SOLVED] GEOM184 - Open Source GIS Practical 3 2024/2025 (JavaScript)

GEOM184 - Open Source GIS Practical 3: Google Earth Engine and applications to Harmful Algal Blooms (HABs), 2024/2025

Welcome to the third practical for GEOM184 - Open Source GIS. Today we will start working with Google Earth Engine (GEE), with the applied task of estimating Harmful Algal Blooms (HABs) in Lough Neagh using remote sensing. We will focus on familiarising ourselves with GEE, importing remote sensing data, and applying simple functions for the estimation of Cyano-HAB indices. We will use the JavaScript version of the Earth Engine (which is how it was originally developed). Although this means learning a new coding language, JavaScript is fairly intuitive, and working through this practical will help you familiarise yourself with the basic operations. Also, remember that if you feel unsure, you can use LLMs for support with code editing (a task at which they normally perform quite well).

Before starting, you need to make sure that you have access to the Earth Engine: at https://code.earthengine.google.com/register log in with a Google account (or create one, if you don't have one). Then proceed to Register a Noncommercial or Commercial Cloud Project, select Unpaid usage (as you are using this for education purposes), and select Academia and research from the dropdown menu. Then select Create a new Google Cloud Project and continue to the summary. Check that everything is working and your account is ready for use.

Important: today we will make use of remote sensing indices derived from the existing literature. For references that you can use for your analysis, please refer to the reading list on ELE at https://ele.exeter.ac.uk/course/view.php?id=21417&section=8.

1 Part A - Getting started and loading remote sensing

In this section, we prepare our environment and load remote sensing data, applying filters and extracting several time frames.
1.1 Getting started

First, because Google Earth Engine (from now on GEE for simplicity) is cloud-based, we will work in the online code editor. Head to https://code.earthengine.google.com/ and log in with your credentials. Whenever possible, save your script by clicking on the Save drop-down menu and choosing Save as....

As you may know from previous courses, commenting is an essential part of working with code. Try to use proper commenting while working in the code editor (comments in JavaScript are made using //). At each step of this practical, you can run your code to check whether your results display correctly and/or error messages appear.

As a first step, we want to include a shapefile representing Lough Neagh in our working environment, which is available from your coursework material. To add it to GEE, click on the Assets tab on the left-hand side of the code editor and, from the dropdown list New, select Shape files. Select your files (you may need to exclude the file ending with the .qmd extension) and then click on Upload. This may take a few seconds, but shortly your new asset should appear.

In your editor window, you can now import your asset and include it in your working environment:

```javascript
// Import Lough Neagh polygon
var NI = ee.FeatureCollection("projects/ee-YOURNAMEHERE/assets/LoughNeagh").geometry();
```

Be mindful to adjust the path above, as it needs to reflect your username and working folder within the code editor. To visualise the polygon within our working environment, we can add the following lines:

```javascript
Map.addLayer(NI, {}, "Lough Neagh");
Map.centerObject(NI, 10); // Focus on the region of interest
```

In essence, the command Map.addLayer adds an element to the map, in our case the variable we defined as NI, whilst "Lough Neagh" is simply the layer name as it appears in the layer manager of the GEE map.
The next line is simply a command to centre our map on the area of interest, with a zoom level of 10: make the number bigger (e.g., 12) and it will zoom in; make it smaller (e.g., 8) and it will zoom out. If you now run the code, you should be able to see the Lough Neagh polygon overlain on the GEE basemap.

Task 1: Is the Lough Neagh polygon appearing on screen? If you open the layer manager in the main map, can you change the transparency?

Nevertheless, the particularly complex nature of this shapefile may have detrimental effects on our calculations (so much so that it could exceed the memory GEE allocates per user). Therefore, it might be easier to use a bounding box of our shapefile to perform most of the analysis, and then clip our results to the more complex shapefile. To do so, we add:

```javascript
// Compute bounding box (rectangle) from Lough Neagh polygon
var bbox = NI.bounds();
```

1.2 Load remote sensing data

We now want to import remote sensing data. We will work with multi-spectral imagery from Sentinel-2. First, let's find the dataset: in the search bar, type Sentinel-2 and, when the results load, select Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A; this is the harmonised Sentinel-2 dataset for surface reflectance, which is suitable for the type of analysis we will be doing. In general, you can look up any remote sensing data available in GEE in this way, so you will know whether you can use a given catalogue or not. Next, we can add the full catalogue to our environment:

```javascript
// Import Sentinel-2 Level-2A harmonised dataset
var S2 = ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED");
```

Warning: due to encoding, copying and pasting directly from this PDF may not preserve special characters (e.g., the underscores in the example above). Double check that these appear when you paste your code, and edit where necessary.

In this line of code we defined the variable S2 as the collection of Sentinel-2 Level-2A harmonised data.
However, this means that the entire Sentinel-2 dataset for the whole world is available, and we will need to filter these data in space and time. The first filter we can apply restricts our analysis to Lough Neagh:

```javascript
var S2_NI = S2.filterBounds(bbox)
```

This function keeps only the images in the collection that intersect a geometry, in our case the bounding box of Lough Neagh. Next, as we are using multi-spectral data, we want to filter out tiles where cloud cover is excessive. To do this, we simply append the following to the previous line of code (after starting a new line):

```javascript
.filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 20); // Cloud cover filtering
```

By changing the numerical value, you can change the cloud-cover percentage threshold being applied.

2 Part B - Defining indices and time filtering

2.1 Define time frames

Here, we will define the time frame over which we want to perform our analysis. In this practical, we will limit our analysis to the 2020-2024 period, at quarterly intervals.

```javascript
var years = ee.List.sequence(2020, 2024);
var quarters = ee.List.sequence(1, 4);
```

These two functions simply create sequences of integers: in the first case from 2020 to 2024, and in the second case from 1 to 4. Next, for each quarter we need a representative value for each pixel. To do so, we will take the median value of each band, for each pixel:

```javascript
// Define a function to calculate the median for a given time period
var getQuarterlyMedian = function(startDate, endDate) {
  var quarterlyImage = ee.Image(S2_NI.filterDate(startDate, endDate).median()).clip(NI);
  return quarterlyImage;
};
```

This is a preparatory function that will later be called within a loop. The function simply estimates the median value for each pixel over each quarter.
Notice that we added .clip(NI): this makes sure that, after the analysis, each output is clipped to the Lough Neagh polygon.

Task 2: Can you think why we would prefer using the median here? Can you create a similar function for, e.g., the average?

2.2 Create a function for environmental indices

We can now start creating our first function to calculate remote sensing-based indices. We will start with a simple one: NDVI. GEE has an in-built function to calculate NDVI, simplifying the process.

```javascript
// Define a function to calculate median NDVI for a given time period
var getQuarterlyNDVI = function(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']);
  return ndvi.clip(NI);
};
```

What we have done here is create a new function, getQuarterlyNDVI, to be executed later; the content within the curly brackets { } is the code executed when getQuarterlyNDVI is called. normalizedDifference is an in-built GEE function that computes (x - y) / (x + y) for the image to which it is applied, where (for NDVI) x is B8, the near-infrared (NIR) band, and y is B4, the red band. Once again, please note the clipping at the end of the defined function: the image is explicitly clipped after all operations have been performed. We are not really visualising anything here; rather, we are preparing some key functions for our next set of analyses.

Task 3: Using the same code structure as above, can you generate a function for the Normalised Difference Cyanobacteria Index (NDCI), e.g., according to Lomeo et al. (2024)?

Of course, we can also create custom functions that are not pre-built into GEE. This is the case for the Cyanobacteria Index (CI) as defined by, e.g., Mishra et al. (2019):

CI = -R + G - (G - NIR) * (λ_R - λ_G) / (λ_NIR - λ_G)

where R, G, and NIR are the red, green and NIR bands, respectively, and λ indicates the middle wavelength of each band.
This can be implemented in GEE as follows:

```javascript
// Define constant values for wavelengths
var l3 = 560; // Green wavelength (nm)
var l4 = 665; // Red wavelength (nm)
var l8 = 842; // Near-infrared wavelength (nm)

// Define the function to calculate CI
var calculateCI = function(image) {
  var CI = image.expression(
    '-B4 + B3 - (B3 - B8) * (l4 - l3) / (l8 - l3)', {
      'B3': image.select('B3'), // Green band
      'B4': image.select('B4'), // Red band
      'B8': image.select('B8'), // Near-infrared band
      'l3': l3,
      'l4': l4,
      'l8': l8
    });
  return CI.rename('CI').clip(NI);
};
```

Take a moment to examine the above: apart from defining the wavelength values, we have defined a new function, calculateCI, using image.expression, which performs a pixel-wise computation.

Task 4: Using a similar approach, can you define the ABDI index by Cao et al. (2021)? This will be necessary to compare different indices in your analysis. Also note the type of bands used by Cao et al. (2021) and their spatial resolution.

3 Part C - Analyse and visualise the map elements

We now need to run the functions that we have created for each quarter and add each iterated value to the map. We will start with NDVI only; then you will add the other indices on your own.

3.1 Create palettes

To visualise colour scales correctly, we need to define palettes. A simple one for NDVI could be:

```javascript
var paletteNDVI = ['red', 'white', 'green'];
var visNDVI = {min: -1, max: 1, palette: paletteNDVI};
```

which is, in essence, a gradient from red to green varying between -1 and 1 (the minimum and maximum values that NDVI can take).

Task 5: Can you create similar palettes for the other indices that you have defined? Bear in mind the potential maximum and minimum values that each index can achieve and design your palettes accordingly.
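As a quick sanity check outside Earth Engine, the CI expression can be evaluated pixel-wise in plain JavaScript. The reflectance values below are invented for illustration, not real Sentinel-2 measurements:

```javascript
// Wavelengths (nm), matching the GEE snippet above
var l3 = 560; // green
var l4 = 665; // red
var l8 = 842; // near-infrared

// Pixel-wise CI, mirroring the image.expression() string
function calculateCI(B3, B4, B8) {
  return -B4 + B3 - (B3 - B8) * (l4 - l3) / (l8 - l3);
}

// Hypothetical reflectances for a single pixel (B3 = green, B4 = red, B8 = NIR)
var ci = calculateCI(0.08, 0.05, 0.12);
console.log(ci); // higher green relative to red pushes CI upward
```

Evaluating the expression on a few made-up pixels like this is an easy way to convince yourself the formula and the image.expression string agree before running it over the whole image collection.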
3.2 Create loop for quarterly calculations

We are now ready to calculate the median pixel-wise NDVI for each quarter and add it to the map for visualisation. We can use the following loop:

```javascript
years.evaluate(function(yearsList) {
  quarters.evaluate(function(quartersList) {
    yearsList.map(function(year) {
      quartersList.map(function(quarter) {
        var startDate = ee.Date.fromYMD(year, (quarter - 1) * 3 + 1, 1);
        var endDate = startDate.advance(3, 'month').advance(-1, 'day');
        var label = 'Q' + quarter + '-' + year;
        var image = getQuarterlyMedian(startDate, endDate);
        var ndvi = getQuarterlyNDVI(image);
        Map.addLayer(ndvi, {min: -1, max: 1, palette: paletteNDVI}, label + '-NDVI');
      });
    });
  });
});
```

Run the code and observe the results appearing on your map. Under the layer manager, you can select or deselect any quarter of your choice. You may also get some errors whilst running it; this may occur if you have set up too conservative a filter (e.g., a very low cloud-cover threshold).

Task 6: Can you see quarters where NDVI > 0, i.e., where an abundance of green colour is shown? What does this mean in the context of Cyano-HABs?

Now, you may want to try a different index, to see what information you can obtain.

Task 7: Either by simply changing the name in the loop, or by adding new lines with the indices that you want to test, edit the loop to include, e.g., NDCI, CI, ABDI. Which one seems to give you the right type of information? Any indication that HABs were occurring at any one time?

4 Conclusions

This is the end of Practical 3. By now, you should be confident in using Google Earth Engine to load remote sensing data, filter in space and time, and perform some index analysis. Next week we will include a few more interactive elements, as well as the ability to produce plots from the images generated in the code editor.

A few points for reflection, considering that this material will be used for your coursework:

• Are quarters the right temporal frequency?
Can we use something different (e.g., months)?

• Some indices are suggested, but there are many available in the literature: can you try and test a few?

• We limited our analysis to 2020-2024; can we try to increase our time range to, e.g., 2017-2024?

• Although we will work on this next time, can you start thinking of some of the quantitative analysis that you can show through these initial results?
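One detail of the quarterly loop worth checking on its own is the quarter arithmetic, where (quarter - 1) * 3 + 1 gives the start month. A minimal plain-JavaScript sketch, no Earth Engine required:

```javascript
// Start month for each quarter, as used in ee.Date.fromYMD(year, ..., 1)
function quarterStartMonth(quarter) {
  return (quarter - 1) * 3 + 1;
}

// Label in the same 'Q<quarter>-<year>' style as the loop
function quarterLabel(year, quarter) {
  return 'Q' + quarter + '-' + year;
}

console.log(quarterStartMonth(1)); // 1 (January)
console.log(quarterStartMonth(4)); // 10 (October)
console.log(quarterLabel(2023, 3)); // 'Q3-2023'
```

If you later switch from quarters to months (one of the reflection points above), this is the arithmetic you would replace.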

$25.00

[SOLVED] BLDG7003 Building Studies Assignment 1 Autumn 2025

BLDG7003 – Building Studies, Assignment 1, Autumn 2025
Due Date: 28 March 2025

Assessment Questions

Question 1(a) – 6 Marks
Building surveyors/certifiers have traditionally relied on regulations to improve safety in buildings. Are regulations on their own sufficient? If not, why is this so? Provide examples to support your claims.

Question 1(b) – 12 Marks
What are the main regulatory requirements that Australian jurisdictions have introduced to improve swimming pool safety? Briefly outline programmes that organisations have implemented to heighten awareness of the hazards and mitigate swimming pool risks. Have these initiatives been successful? What additional improvements are needed? Your answer must include relevant evidence to support your claims.

Question 1(c) – 12 Marks
Fire safety measures must be regularly maintained so that they are capable of performing as expected in an emergency. Detail the evidence that demonstrates that regular maintenance improves the reliability of these measures. Identify who is responsible for their maintenance and discuss the reasons why this arrangement does or does not work. What main elements should be included in a fire safety maintenance programme?

To maximise the marks that your submissions receive, you must ensure that:
i) you understand the question and that your submission answers the question;
ii) your arguments are logical, supported by appropriate evidence and well presented;
iii) the submission is your work only, except where credit is given for quotes or reliance on information from research literature;
iv) references are included in the text and at the end of each question; and
v) you carefully read your answer and ensure that the grammar is correct and spelling mistakes are eliminated.

$25.00

[SOLVED] GEOM184 - Open Source GIS Practical 4 2024/2025

GEOM184 - Open Source GIS Practical 4: Google Earth Engine and applications to Harmful Algal Blooms (HABs)

Welcome to the fourth practical for GEOM184 - Open Source GIS. Today we will continue working with Google Earth Engine (GEE) and the Harmful Algal Blooms (HABs) problem. We will mostly focus on adding elements to our GEE visualisation and on producing quantitative plots of the indices that we defined last week. We will also learn how to export the data produced, and how to publish our analysis as an app for public use (this is not a requirement for your assessment, but it is a really helpful tool). Remember that if you feel unsure, you can use LLMs for support with code editing (a task at which they normally perform reasonably well).

Important: we will use the code that we generated last week, so if you are not up to date with it, this is the time to complete it! Also, please note that due to formatting issues, some of the code lines may need little tweaks, e.g., missing underscores.

1 Part A - Creating a split panel

In this section, we will learn how to create a split panel: this will give us a chance to visualise two indices of our choice side by side (or more, by switching between layers). In this way, we will get a true understanding of the occurrence of Cyano-HABs in Lough Neagh. We will keep most of the code that we produced last week, but we will need to make some changes within the loop and add a few lines just before the loop itself.

1.1 Remove some of the old lines

We need to remove a few lines to get the split panel to work. You can remove (or, if you prefer, comment out using //) the following:

```javascript
Map.addLayer(NI, {}, "Lough Neagh");
Map.centerObject(NI, 10);
```

1.2 Create two panels

Now we want to create two map instances, i.e., two separate map environments.
To do this, include the following lines of code before the loop (i.e., before the part of the code beginning with years.evaluate):

```javascript
// Create two map instances
var map1 = ui.Map();
var map2 = ui.Map();
```

This code creates two map instances within the GEE environment, and each map environment (i.e., map1 and map2) can have elements added to it independently. ui.Map() is a GEE function that creates an interactive map. Each of the newly defined maps will work independently, unless explicitly linked (which we will do soon).

Next, we centre both our new maps on Lough Neagh, so we add:

```javascript
// Set center and zoom level
map1.centerObject(NI, 10);
map2.centerObject(NI, 10);
```

You will notice that there is very little difference between these lines and the ones we deleted earlier, except for explicitly mentioning map1 and map2. The next step is to add the maps to the user interface, where we assign equal width to both maps (and 100% height). To do so, include the following lines:

```javascript
// Add the maps to the user interface
var panel1 = ui.Panel({
  widgets: [map1],
  style: {width: '50%', height: '100%'}
});
var panel2 = ui.Panel({
  widgets: [map2],
  style: {width: '50%', height: '100%'}
});
```

The ui.Panel function takes two arguments: widgets (where we select the map for each panel), and style, where we can change the appearance (in our case, the width and height of the panels). Next, we add the panels to a split panel:

```javascript
// Add the panels to a split panel
var splitPanel = ui.SplitPanel({
  firstPanel: panel1,
  secondPanel: panel2,
  wipe: true
});
```

And we can now add the split panel to the user interface:

```javascript
// Add the split panel to the user interface
ui.root.clear();
ui.root.add(splitPanel);
```

All of the above is great, but now we need to make sure that the two panels are linked to each other, so that any operation performed (zooming in and out, panning, etc.)
will be observed in both panels. So we add a map linker instance:

```javascript
// Create a Map Linker instance
var mapLinker = ui.Map.Linker([map1, map2]);
```

1.3 Adapt the loop

All is now ready to visualise a split panel with the indices of your choice. However, to do so we need to slightly adapt our loop. For example, last week we used the following for NDVI:

```javascript
Map.addLayer(NDVI, {min: -1, max: 1, palette: paletteNDVI}, label + '_NDVI');
```

Assuming we want NDVI on one side and another index (e.g., ABDI) on the other, we can remove (or comment out with //) the above code and add instead:

```javascript
map1.addLayer(NDVI, {min: -1, max: 1, palette: paletteNDVI}, label + '_NDVI');
map2.addLayer(ABDI, {min: -1, max: 1, palette: paletteABDI}, label + '_ABDI');
```

Remember that any index you add within the loop needs to be defined in the same way as var NDVI = getQuarterlyNDVI(image);, and this needs to occur before you add the new layers to either map1 or map2. Now, try to run your code and see how the results look.

Task 1: Can you move the slider left and right? Are the two maps joined together in the same place, and do they remain joined even when zooming in and out? Can you tick and untick layers for quarters of your choice for each map? (Hint: you may have to move the slider all the way to the right to open the layer manager for the left map.)

1.4 Explore visualisation of different indices

Now that you have two maps to visualise your results, we can use them to make a comparative analysis between quarters. You may want to focus on Summer 2023 as a prime example of Cyano-HAB observations (especially given the exceptionality of the event). One way to do this is to analyse indices pair-wise. It is of course possible to add more indices to your maps; it is up to you how you perform your analysis.
For example, you may want to add NDVI and NDCI to map1 and CI and ABDI to map2; to do so, just follow the same steps as above and make sure to add layers to the map of your choice.

Task 2: Can you see any sign of Cyano-HABs in the map? Can you refer back to the interpretation of these indices in the literature (e.g., NDVI > 0 indicates vegetation, and the nearer it gets to 1, the healthier the vegetation; would you expect vegetation in the middle of a lake)?

2 Part B - Statistical analysis of indices

To have a full understanding and a quantitative assessment of Cyano-HABs in Lough Neagh, we need a (relatively basic) statistical analysis of data that can quantify Cyano-HAB occurrence, for which we may want to create plots directly within GEE. Of course, you could download each of the maps you have created individually and analyse each image within QGIS, but this would not be a good use of your time or of computational resources. To this end, we need to edit our code in two places: just before the loop, and within the loop.

2.1 Preparation for the loop

Before the loop, we need to introduce some empty lists that will be populated with the numerical values generated at each iteration. Assuming we are using NDVI and ABDI, the first part of the code to include is:

```javascript
// Lists to hold NDVI and ABDI stats
var ndviValues = [];
var abdiValues = [];
```

This simply adds two empty lists where we will store the information for both NDVI and ABDI.

2.2 Amending the loop

We can now make a few changes within the loop to produce an output of our statistics in the form of a plot that includes mean and chosen percentile values for NDVI (and ABDI), for each quarter.
In the most nested part of the loop (just after defining the variables NDVI and ABDI) we add:

```javascript
// Compute statistics for NDVI
var ndviStats = ndvi.reduceRegion({
  reducer: ee.Reducer.mean()
    .combine(ee.Reducer.percentile([5, 95]), '', true),
  geometry: NI,
  scale: 10
});
```

The newly defined variable ndviStats contains all the statistical values that we will use for our analysis. The method used here is reduceRegion, for which we define a reducer: this identifies the operation that we want to perform. The first argument is ee.Reducer.mean(), which means that we are calculating the mean value of NDVI over the whole of Lough Neagh for each quarter (don't forget that this operation lies within the most nested part of the loop, so it is repeated for every iteration, i.e., for every quarter). Then, ee.Reducer.percentile gives us the NDVI values for that quarter, across the whole of Lough Neagh, below which 5% and 95% of the observations fall, respectively. Other helpful percentiles that we could use are 25% and 75% (which you may know as the interquartiles); you can try one and then the other to see what makes the most statistical sense.

Task 4: Can you add the function computing statistical indicators for ABDI (and/or any other index)?

Next, we add the statistical values we have calculated to our empty lists (remember we defined them earlier?):

```javascript
// Add statistics to the lists
ndviValues.push(ee.Feature(null, {
  label: label,
  Mean: ndviStats.get('nd_mean'),
  p5: ndviStats.get('nd_p5'),
  p95: ndviStats.get('nd_p95')
}));
```

Here, be mindful of the names within the function arguments: the values on the left-hand side are the newly defined properties, whilst those on the right-hand side are being pulled from the ndviStats variable that we defined earlier. If you use different percentiles, you will need to edit this.
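To see what the combined reducer is actually computing per quarter, here is a plain-JavaScript sketch of the same three summary statistics over a handful of invented pixel values. Note that GEE's percentile estimator may differ slightly from the simple nearest-rank rule used here:

```javascript
// Arithmetic mean of an array of numbers
function mean(values) {
  return values.reduce(function (a, b) { return a + b; }, 0) / values.length;
}

// Nearest-rank percentile (GEE's estimator may interpolate instead)
function percentile(values, p) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Hypothetical per-pixel NDVI values for one quarter
var ndviPixels = [0.02, 0.05, 0.11, 0.08, -0.01, 0.04, 0.09, 0.03, 0.06, 0.07];
var stats = {
  Mean: mean(ndviPixels),
  p5: percentile(ndviPixels, 5),
  p95: percentile(ndviPixels, 95)
};
console.log(stats);
```

Each quarter, reduceRegion collapses the whole lake into just such a small record of summary numbers, which is what makes the charting step cheap.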
Important: please be mindful of the nomenclature used to access the values stored in ndviStats: we use the name nd because NDVI was generated using GEE's in-built normalised-difference function. For any other index that does not use a normalised difference (e.g., ABDI), you will need to use the name you assigned when defining calculateABDI (or similar), for which we may have used return ABDI.rename('ABDI').clip(NI); we therefore need to replace nd with whatever name we included within the quotes for calculateABDI.

Task 5: Can you repeat the same operation for ABDI (and any other index you have included)? Please bear in mind the type of function you used (i.e., a normalised difference or a custom function) and the name you may have assigned (see above).

One final step and we are ready to add the mean NDVI and ABDI values as a plot box. To do this, we need to jump two levels up in the loop, meaning that the following lines of code need to be placed within the quarters.evaluate level of the loop, i.e., after the whole set of quartersList.map and yearsList.map iterations has been completed:

```javascript
// Create NDVI chart
var ndviChart = ui.Chart.feature.byFeature(
    ee.FeatureCollection(ndviValues), 'label', ['Mean', 'p5', 'p95'])
  .setChartType('LineChart')
  .setOptions({
    title: 'NDVI over time',
    hAxis: {title: 'Quarter'},
    vAxis: {title: 'NDVI Value'},
    lineWidth: 1,
    pointSize: 4
  });
```

Note that in the first line of this code, within the square brackets, we included the statistical quantities we defined earlier. You will need to edit this if you change the percentiles. The meaning of the other lines is relatively simple: we are plotting a chart using the features of a collection, and we are setting options within the chart; feel free to edit any of these to suit your design plan.

Task 6: Can you add ABDI (and any other index) to this?
Would you try changing some of the visualisation options?

Finally, just after these new lines, we can finalise the creation of a panel within the GEE environment:

```javascript
// Create UI panels for the charts
var ndviChartPanel = ui.Panel({
  widgets: [ndviChart],
  style: {width: '500px', height: '200px', position: 'bottom-left'}
});

// Add the chart panels to the map panels
map1.add(ndviChartPanel);
```

Task 7: Can you add ABDI (and any other index) to this? Bear in mind that if you have followed the approach so far of adding NDVI to panel 1 and ABDI to panel 2, ABDI will be on the right-hand side panel, so you may want to change the style options!

If you run your code now, you should be able to see the plots appearing on screen, indicating the mean and any other percentiles that you may have used for your analysis. If you want more details on these plots, you can click on the icon in the top-right of each plot box, which will open a new window with your plot. In this new window you can save the plot as it is, or download the data.

2.3 Analysis and anomaly detection

As we want to understand how severe Cyano-HABs have been in Lough Neagh, the graphs can already tell us some interesting stories.

Task 8: Can you visually see any pattern in the indices? Any indication that something unusual occurred in 2023? Make sure you comment on or add information about this.

An even better approach than just visually observing trends is to use a simple Z-score approach. This is given by the ratio:

Z = (x - µ) / σ

where x is an individual value (e.g., the mean NDVI for a quarter of choice), µ is the average of all these values (e.g., the average of all mean NDVI values across all quarters) and σ is their standard deviation. Typically, any value with Z > 1 or Z < -1 lies more than one standard deviation from the mean.
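The Z-score idea can be prototyped in plain JavaScript before wiring it into your GEE chart data; the quarterly mean values below are invented for illustration:

```javascript
// Z-score of each value against the mean and (population) standard
// deviation of the whole series
function zScores(values) {
  var n = values.length;
  var mu = values.reduce(function (a, b) { return a + b; }, 0) / n;
  var sigma = Math.sqrt(
    values.reduce(function (a, x) { return a + (x - mu) * (x - mu); }, 0) / n
  );
  return values.map(function (x) { return (x - mu) / sigma; });
}

// Invented quarterly mean NDVI values; the last one is an obvious outlier
var quarterlyMeans = [0.05, 0.06, 0.04, 0.05, 0.06, 0.05, 0.04, 0.20];
var z = zScores(quarterlyMeans);
console.log(z[z.length - 1] > 2); // prints true: the outlier stands out
```

Applied to your real quarterly means, a quarter with an unusually large |Z| is a candidate anomaly worth inspecting on the map.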

$25.00

[SOLVED] TEE3331 FEEDBACK CONTROL SYSTEMS

TEE3331 - FEEDBACK CONTROL SYSTEMS
Instruction Manual for Experiment 1: Design and Simulation of Feedback Control Systems

Objectives
1. To learn to use MATLAB
2. To design a P-controller and a PI-controller for a 1st-order plant
3. To study through simulation the effects of controller gains on the closed-loop response

1 Background

1.1 Introduction

MATLAB is a high-level programming language and interactive environment that allows the user to perform computationally intensive tasks with less effort compared to traditional programming languages such as C and C++. It offers facilities to obtain numerical solutions of differential equations, manipulate matrices, visualise data through 2-D and 3-D graphics, implement algorithms, create user interfaces, interface with other programming languages, etc. Add-on toolboxes extend the MATLAB environment to solve particular classes of problems in specific application areas, such as signal processing, image processing, control design, communications, financial modelling and analysis, etc.

[Figure 1: Screenshot of MATLAB.]

In this simulation exercise, you will use functions that come with the Control Systems Toolbox of MATLAB. A list of useful functions is provided in this manual. You will need them at different steps of control design and simulation. You may also use other functions not listed in this manual. Exactly which functions are needed depends on what steps you follow to design the controller. General guidelines for the design of the controller are given in the section titled 'Design and Simulation'.

1.2 Familiarization with MATLAB

Commonly used commands and functions are given in this section, with brief explanations of how to use each of them. You don't have to practice all the commands and functions given in this section; it is for reference only.
In the next section and in subsequent sections, you will use the functions required to simulate the response of dynamic systems and to design a feedback system.

a. Command Prompt

The MATLAB command prompt is >>. Type a=5 at the command prompt and press ENTER. What do you see? MATLAB echoes back the value of the variable a. Type b=3; at the command prompt and then press ENTER. This time MATLAB doesn't echo back; however, the variable b is still there in the MATLAB workspace. Type who at the command prompt and then press ENTER. MATLAB will list all variables in the workspace. For the above exercise, they are a and b.

b. Entering Matrices

You can enter matrices using square brackets. For example, at the command prompt, type x = [1 10; 2 5] and then press ENTER. This will create the variable x in the workspace with the value

```matlab
x =
     1    10
     2     5
```

Rows of the matrix are separated by ;. You must give a space between any two elements of a row. A row vector can be defined as y = [1 10 15] and, similarly, a column vector as z = [1;10;15]. The command transpose(m) returns the transpose of matrix m, so z = [1;10;15] is identical to z = transpose([1 10 15]).

Entering v = [2 : 1 : 10] creates a vector with elements of increasing value: the first element is 2 and the last element is 10, with an increment of 1 between adjacent elements. So v = [1 : 1 : 10] generates the row vector v = [1 2 3 4 5 6 7 8 9 10]. A vector with elements of increasing value can also be generated using the function linspace: if you type v = linspace(1, 10, 100), MATLAB returns a vector v with 100 elements whose first element is 1 and last element is 10. v = [10 : -2 : 2] generates a vector with elements decreasing by 2, i.e., the vector v = [10 8 6 4 2].

c. Basic Operations

• Add or Subtract: x = A+B, y = A-B. Make sure that the dimensions of A conform to the dimensions of B.
• Product: x = A*B; [dimensions must agree].
• Division: x = A/B; [dimensions must agree].
Inverse: x = inv(A) (A must be a square matrix).
Element-by-element product: x = p.*q.

d. M-files
MATLAB can execute a sequence of statements saved in a file. Such files are called "M-files" and must have the extension ".m" as the last part of their filename. There are two types of M-files: script files and function files. For this lab, you will use script files only. You can create a script file using the MATLAB editor: to open the editor, go to File → New → Script. The editor window appears with a blank page. In the editor window, type in the commands that you would like to execute, then save the file with a name, e.g., abc.m. You can later execute all commands written in the file by typing the file's name (abc) at the MATLAB command prompt (>>) and pressing the ENTER key.

Exercise:
i. Open the editor window.
ii. Type the following commands:
a = [1 10;2 1];
b = inv(a);
iii. Save the file as abc.m
iv. Go to the MATLAB command window, type abc at the command prompt and press ENTER.

e. Graphics
The command plot(x) opens a figure window that shows a plot of the vector x. The command plot(x1, x2) shows a plot with the variable x1 along the horizontal axis and x2 along the vertical axis. The command grid creates grid lines on the plot. Type the following at the command prompt:
t = [-1 : 0.1 : 1];
y = 2 + 3 * t;
plot(t, y), grid

f. Control-related functions
Define a transfer function:
Num = [1 10];
Den = [1 3 5];
G = tf(Num, Den);
The function tf generates the transfer function with the given numerator (Num) and denominator (Den). If you type G at the command prompt and press ENTER, MATLAB displays the resulting transfer function, (s + 10)/(s^2 + 3s + 5).
Define a transfer function with transportation delay:
Num = [1 10];
Den = [1 3 5];
td = 0.1;
G = tf(Num, Den, 'InputDelay', td);
If you type G and press ENTER, MATLAB displays the transfer function together with its input delay.

Simulate a step response:
- The command step(G) generates the step response of the system described by G and shows it in a figure window.
- You can specify the time vector before generating the step response:
t = [0 : 0.01 : 10];
step(G, t);
where the transfer function G has already been defined.
- These two forms of step do not create an output vector in the workspace; they simply show the response in a figure window. If you want the output vector available in the workspace, use the following:
t = [0 : 0.01 : 10];
y = step(G, t);
- If you type [y, t] = step(G), MATLAB generates the step response of the dynamic system defined by G using an automatically generated time vector and returns both t and y.
- You can specify the duration of the step response using [y, t] = step(G, Tend), where Tend is the end time of the simulation; MATLAB generates the time vector t automatically.

Root locus: the command rlocus(G) creates the root locus of the already-defined transfer function G. Note that for a system with delay, you need to approximate the delay using a Pade approximation before you can use this command.

Generate a Bode plot:
- The command bode(G), where G is a dynamic system already defined, shows the Bode plot in a figure window.
- You can specify the frequency vector w before generating the Bode plot and use bode(G, w) instead.
- If you want the results available in the workspace for further processing, use [mag, ph] = bode(G, w). This creates two arrays: mag for the magnitude of the Bode plot and ph for the phase.
- Instead of a user-defined frequency vector, the Bode plot can be generated for a MATLAB-generated frequency vector using [mag, ph, w] = bode(G).
Obtain the closed-loop system from the open loop: if G and H are the forward-path transfer function and feedback-path transfer function, respectively, you can determine the transfer function of the closed loop using the following command:
Gc = feedback(G, H);
The command Gc = feedback(G, 1) returns the closed-loop transfer function under unity feedback.

1.3 Selecting Gains of the PID Controller
The PID controller generates the control input according to
u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt.
Taking the Laplace transform,
U(s) = (Kp + Ki/s + Kd s) E(s),
where Ti = Kp/Ki and Td = Kd/Kp. So the transfer function of the PID controller is
C(s) = Kp (1 + 1/(Ti s) + Td s).
For a first-order plant transfer function, the denominator of the resulting closed-loop transfer function is a quadratic function of s, so one can find the gains such that the closed-loop poles have the desired natural frequency and damping factor. However, the closed loop will also have a zero, which makes the response deviate from the response expected from the designed closed-loop poles. For plants with higher-order transfer functions, finding the PID gains becomes more difficult using this approach. Ziegler-Nichols tuning of the PID controller is a widely used method for tuning the PID gains. There are two different ways of finding the gains using Ziegler-Nichols tuning; in this experiment, you will learn one of them, namely the method based on the process reaction curve.

1.4 Ziegler-Nichols Tuning
Step responses of a large number of process control systems exhibit a process reaction curve, which can be generated from experimental step-response data. If a tangent is drawn at the inflection point of the s-shaped reaction curve (shown in Figure 2), the slope of the line is R = T/A and the intersection of the tangent line with the time axis identifies the time delay L = td. The controller parameters are designed to result in a closed-loop step response with a decay ratio of approximately 0.25.
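The series/feedback operations above amount to simple arithmetic on the transfer-function coefficient vectors: for an open loop Gol = num/den, the unity-feedback closed loop is num/(den + num). A minimal sketch of that arithmetic, written in Python rather than MATLAB, using an illustrative plant 2/(3s+1) and gain Kp = 4 (assumed values, not from the manual):

```python
def series_gain(Kp, num, den):
    """Equivalent of series(Kp, Gp): scale the numerator polynomial by the gain Kp."""
    return [Kp * c for c in num], den

def feedback_unity(num, den):
    """Equivalent of feedback(Gol, 1): closed loop = num / (den + num)."""
    pad = [0.0] * (len(den) - len(num))          # align polynomial orders before adding
    return num, [d + n for d, n in zip(den, pad + num)]

# Illustrative plant Gp = 2/(3s+1) with Kp = 4 (assumed values)
num_ol, den_ol = series_gain(4.0, [2.0], [3.0, 1.0])   # open loop: 8/(3s+1)
num_cl, den_cl = feedback_unity(num_ol, den_ol)        # closed loop: 8/(3s+9)
dc_gain = num_cl[-1] / den_cl[-1]                      # transfer function evaluated at s = 0
print(dc_gain)   # 8/9: the step response settles below 1, leaving a steady-state error
```

The DC gain below 1 previews the steady-state error you will measure in the P-control exercise.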
This means that the transient decays to a quarter of its value after one period of oscillation.

[Figure 2: Reaction curve.]

Ziegler-Nichols tuning formulae for different controllers are given in Table 1.

[Table 1: Ziegler-Nichols Tuning Formulae for a Decay Ratio of 0.25]

2 Hands-On Exercise
You will use MATLAB to simulate the responses of a plant in open loop and also under closed-loop control using P-control and PI-control. All observations must be noted down and the responses printed. Printouts must be attached to the report to be submitted at the end of the session. You are also required to submit the list of all commands and functions used: write these commands and functions in a script file and take a printout at the end of the session.

2.1 Open-Loop Response of a First-Order Transfer Function
1. You will be given the actual plant model during the laboratory session.
2. Create a new script file. Open the MATLAB editor and enter the following lines:
A = xx;
tau = xx;
td = xx;
The plant can then be generated using the following command:
Gp = tf(A, [tau 1], 'InputDelay', td);
These lines define the 1st-order-plus-delay transfer function Gp(s) = A e^(-td s) / (tau s + 1).
Add the command
[y, t] = step(Gp, Tend);
to simulate the step response, and the following line to plot it:
figure(1), plot(t, y), grid, title('Step response of plant');
Tend is the end time of the simulation. Choose the value of Tend such that the relevant features of the step response, e.g., delay, transient and steady state, are clearly visible in the plot. The plot command draws y as a function of t, the command grid creates grid lines on the plot, and title adds a caption to the plot. The function figure(1) opens a figure window and labels it as number 1. Save the lines you wrote in the editor as a script file; let the name of the file be abc.m.
3. Type abc at the command prompt. It will execute all commands included in the m-file abc.m.
4. Take a printout of the response, which is the step response of the plant Gp.
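The body of Table 1 did not survive formatting here, so the helper below assumes the commonly quoted Ziegler-Nichols reaction-curve formulae for a 0.25 decay ratio (P: Kp = 1/(RL); PI: Kp = 0.9/(RL), Ti = L/0.3; PID: Kp = 1.2/(RL), Ti = 2L, Td = 0.5L); check these against the table handed out in the lab before using them. A small Python sketch of the lookup:

```python
def zn_gains(R, L, controller="PID"):
    """Ziegler-Nichols reaction-curve tuning (first method).
    R is the tangent slope and L the time delay read off the reaction curve.
    Standard textbook formulae assumed; verify against the lab's Table 1."""
    a = R * L
    if controller == "P":
        return {"Kp": 1.0 / a}
    if controller == "PI":
        return {"Kp": 0.9 / a, "Ti": L / 0.3}
    return {"Kp": 1.2 / a, "Ti": 2.0 * L, "Td": 0.5 * L}

# Illustrative reaction-curve readings, not lab values
gains = zn_gains(R=2.0, L=0.5, controller="PI")
```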
Estimate the first-order-plus-delay plant model from the printout. From the next section onwards, you will use this estimated plant model.

2.2 Study the Effect of Kp of the P-Controller
In this section, you will simulate the closed-loop response when the plant is put under feedback control using a proportional controller. The controller produces a signal proportional to the error, i.e., u(t) = Kp e(t), where e(t) is the error and Kp is the proportional control gain. The transfer function of the proportional controller is C(s) = Kp, and the closed-loop transfer function is
Gcl(s) = Kp Gp(s) / (1 + Kp Gp(s)).

2.2.1 Closed-loop step response
Use the step response obtained earlier to determine the proportional control gain Kp using the Ziegler-Nichols tuning formula for the P-controller. Let this value be x. You can use the following functions to simulate the closed-loop response:
Kp = x;
Gol = series(Kp, Gp);
Gcl = feedback(Gol, 1);
[y1, t1] = step(Gcl, Tend);
The first of these four lines (Kp = x) defines the controller transfer function. The second line defines the open-loop transfer function formed by the series connection of the controller (Kp) and plant (Gp). The third line finds the closed-loop transfer function under unity feedback, and the fourth line simulates the step response of the closed loop.

Simulate the closed-loop response with two other gains, one greater than x and one smaller than x. For example, you may choose xh = 1.25x and xl = 0.75x, respectively. Let the outputs be y2 and y3, respectively, for these two gains, with corresponding time arrays t2 and t3. Show the closed-loop step responses for all three gains on a single plot:
figure(2), plot(t1, y1, t2, y2, t3, y3), grid;
The function figure(2) creates a new figure window and plots the step responses in this new window. Add a title to the figure:
title('Closed loop step response with P-control');
Take a printout of this figure. From the plots of the closed-loop step responses, determine the steady-state error and rise time.
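The trend you should see in those plots can be predicted from the final-value theorem: for unity feedback around a type-0 plant with DC gain A (the delay term has unit DC gain and does not change this), the steady-state step error is e_ss = 1/(1 + Kp·A), so it falls as Kp rises. A quick Python check, with A = 2 as an assumed illustrative value:

```python
def steady_state_error(Kp, A):
    """Steady-state step error of unity feedback around a plant with DC gain A."""
    return 1.0 / (1.0 + Kp * A)

A = 2.0                              # assumed plant DC gain, not a lab value
for Kp in (0.75, 1.0, 1.25):         # gains bracketing a nominal x, as in the exercise
    print(Kp, steady_state_error(Kp, A))
```

Larger Kp gives a smaller steady-state error (and a faster rise), which is what the simulation should confirm.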
How does the steady-state error vary with changing gain? Does it increase or decrease? What about the response time? You may change the gain (Kp) to a few other values to verify your observations. Include these observations in your report.

2.2.2 Bode plot
Generate Bode plot data for the plant using the following commands:
[mag, php, w] = bode(Gp);
dbp = 20 * log10(mag);
The first function generates Bode plot data for the plant described by Gp using a MATLAB-generated frequency vector; it also returns the frequency vector w. The second line converts the magnitude to dB.

Generate Bode plot data for the open-loop transfer function:
Kp = x;
[mag, phol1] = bode(series(Kp, Gp), w);
dbol1 = 20 * log10(mag);
The function series(Kp, Gp) inside the function bode creates the open-loop transfer function Gol(s) = C(s)Gp(s) = Kp Gp(s). Repeat this for another value of Kp > x, say, xh:
Kp = xh;
[mag, phol2] = bode(series(Kp, Gp), w);
dbol2 = 20 * log10(mag);
Use the following functions to open a new figure window and show the Bode plots there:
figure(3);
subplot(211), semilogx(w, dbp(:), w, dbol1(:), w, dbol2(:)), grid;
title('Open loop Bode (mag) plot with P-Control');
legend('Plant', 'gain = Kp', 'gain > Kp');
subplot(212), semilogx(w, php(:), w, phol1(:), w, phol2(:)), grid;
title('Open loop Bode (phase) plot with P-Control');
legend('Plant', 'gain = Kp', 'gain > Kp');
The function subplot(mnk) divides the figure window into m × n sub-windows and shows the plot in the kth sub-window. For example, subplot(211) divides the window into two rows and one column and draws the plot in the 1st sub-window.

Compare the Bode plots of the plant and of the two open-loop transfer functions. Your report must include your observations about the effect of the gain Kp on the open-loop Bode plots.
Generate Bode plot data for the closed-loop transfer function:
Kp = x;
[mag, phcl1] = bode(feedback(series(Kp, Gp), 1), w);
dbcl1 = 20 * log10(mag);
The function feedback(series(Kp, Gp), 1) inside the function bode creates the closed-loop transfer function for Kp Gp(s) in the forward path and unity gain in the feedback path. Repeat this for Kp = xh by using the following:
Kp = xh;
[mag, phcl2] = bode(feedback(series(Kp, Gp), 1), w);
dbcl2 = 20 * log10(mag);
Then plot them:
figure(4);
subplot(211), semilogx(w, dbcl1(:), w, dbcl2(:)), grid;
title('Closed loop Bode (mag) plot with P-Control');
legend('gain = Kp', 'gain > Kp');
subplot(212), semilogx(w, phcl1(:), w, phcl2(:)), grid;
title('Closed loop Bode (phase) plot with P-Control');
legend('gain = Kp', 'gain > Kp');

2.3 PI Controller
Proportional-plus-Integral (PI) control consists of a proportional gain and an integral gain. The integral part produces a correcting signal proportional to the integral of the error signal:
u(t) = Kp e(t) + (Kp/Ti) ∫ e(t) dt.
The transfer function of the PI controller is
C(s) = Kp (Ti s + 1) / (Ti s).

Next you will simulate the closed-loop response with PI control for different values of Kp and Ti. Choose the initial gains using the Ziegler-Nichols tuning method for PI-control (given in Table 1). Let these values be Kp = α and Ti = β. The following lines simulate the step response with PI-control:
Kp = α;
Ti = β;
C = Kp * tf([Ti 1], [Ti 0]);
Gol = series(C, Gp);
Gcl = feedback(Gol, 1);
[y4, t4] = step(Gcl, Tend);

Keeping Ti fixed, increase Kp to a higher value and repeat the step-response simulation; let the output be y5 with corresponding time array t5. Plot the step responses in a single figure and take a printout:
figure(5), plot(t4, y4, t5, y5), grid;
Observe the changes in the response. Your report should include these observations and your comments on them. Generate the open-loop Bode plot for one set of Kp = α and Ti = β.
How does the PI-controller modify the magnitude and phase plots of the open-loop transfer function compared to those with the P-controller? These observations are to be included in the report.

3 Report
We suggest you write your report in Word, copy in your plots and MATLAB code, and upload it to the LumiNUS 'Lab1 report upload' folder. Your report must address the following key issues:
1. For P-control, the effect of the gain Kp on the step response and Bode plots.
2. For PI-control:
- the effect of integral control on the steady-state error;
- the effect of increasing Kp on the closed-loop step response;
- the effect of integral control on the open-loop Bode plot.
Print/save your file in PDF format. Name your report using your matric number, i.e., A1234567E.pdf, and upload it to LumiNUS.


[SOLVED] 158326 Software Construction Tutorial 2

158.326 Software Construction Tutorial 2

Create an ASP.NET Core Empty project in C# using Microsoft Visual Studio 2022 based on a modification of the Tutorial 1 Parking scenario. Make sure the MVC architectural pattern is applied in your implementation.

1. Define 3 classes GeneralParkingKiosk, StaffParkingKiosk and StudentParkingKiosk as follows:

GeneralParkingKiosk
- Property: GeneralHoursParked : Decimal
- Method: FindGeneralParkingAmount( ) : Decimal
- Rule: $2 per hour.

StaffParkingKiosk
- Property: StaffHoursParked : Decimal
- Method: FindStaffParkingAmount( ) : Decimal
- Rule: $2 for the first ten hours. For hours in excess of the ten, staff will be charged $2 per hour.

StudentParkingKiosk
- Property: StudentHoursParked : Decimal
- Method: FindStudentParkingAmount( ) : Decimal
- Rule: $1 per hour.

2. Add an interface IKiosk as follows:

IKiosk
- Property: HoursParked : Decimal (read-only)
- Method: FindParkingAmount( ) : Decimal

3. Define three new classes GenKioskWrap, StaffKioskWrap and StudKioskWrap. The three classes should:
a. implement IKiosk;
b. encapsulate the classes defined in 1 (i.e., the GenKioskWrap class encapsulates the class GeneralParkingKiosk, the StaffKioskWrap class encapsulates StaffParkingKiosk, and so on).

4. Web User Interface Design (screenshots were taken when the 'Calculate' button was clicked).

NOTE: Use the Math.Ceiling function to convert decimals to the next highest integer in your calculation.

HINT: Solution Explorer should show the program's MVC structure of C# classes and other files as in the figure below.


[SOLVED] LIAF105 - Quantitative Methods Web

Coursework 2025/01
LIAF105 - Quantitative Methods

This coursework is worth 40% of the overall grade. The deadline for submission on Turnitin is Friday, 28 March, 4:00 PM. You must do the assignment individually.

Aims: The aim of this assessment is to develop and evaluate data-driven models based on bivariate and multivariate regression models, and to demonstrate the ability to apply the coefficient of variation to given data. The coursework allows students to:
(1) develop and demonstrate the application of the methods of ordinary least squares regression using Excel.
(2) show an understanding of the importance of the coefficient of variation.

The assessment will consist of graphs and statistical analysis within a written report, fully explaining results and findings for each question. This should be between 1000 and 1500 words (excluding figures) and must be typed and submitted as a Word document, with Excel figures and tables inserted appropriately. You DO NOT have to reach the maximum word count, and you may lose marks for forcing your word count up with irrelevant information.

Report writing requirements:
• There are 11 questions and you should answer all of these separately.
• Type your answers to each question in a Word document, and number the answers clearly.
• Show all relevant Excel regression summary outputs within your answers and include relevant analysis / findings / conclusions for each question.
• Use references based on all the literature you have used in compiling this report. Use the APA referencing system.
• Pay attention to the overall presentation and structure, ensuring logical development of ideas.

SECTIONS A and B:
• DO NOT simply copy and paste AI answers to the questions, as this will be obvious and receive 0 marks.
• Structure your work, which should comprise a relevant discussion of your findings within each question, including the following over the entire coursework:
- A summary of the main regression results including (where relevant) the estimated regression coefficients and models, p-values and significance of F value, coefficients of determination and regression summary analysis.
- Clear explanation of your regression line graphs and statistical results.
- Understanding of the coefficient of determination.
- Hypothesis tests using regression coefficients and interpretation of findings.

SECTION C:
• Show an understanding of the coefficient of variation and decisions based upon it.

Assessment Criteria:
• Demonstration of competence in the production and presentation of results from Microsoft Excel.
• Providing appropriate analysis, explanation and interpretation of results, without dependence on AI.
• Showing understanding of the methods employed in the analysis of data.
• Structuring and presenting the report clearly in Microsoft Word (including labelling of graphs and tables).

Coursework Brief

SECTIONS A and B:
Samples of consumers in the UK who buy bottled water were surveyed over the course of a year, and this data was condensed. You are required to investigate the corresponding data set, examining the relationship between market Demand for bottled water and two variables, Price of bottled water and Income of consumers. You will evaluate the significance of the variables within your models with a view to understanding influences on consumer behaviour.

In section A, use a bivariate regression model to investigate the following relationships separately:
(1) Demand for bottled water and Price of bottled water.
(2) Demand for bottled water and Income of consumers.
You are expected to analyse the regression results and comment on your findings.
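For orientation, the bivariate model Y = α + βx that Excel's regression summary reports boils down to the ordinary-least-squares formulas below. The sketch uses made-up (price, demand) numbers, not the coursework dataset, which must be downloaded from Moodle:

```python
def ols(x, y):
    """Bivariate OLS: returns (intercept, slope, R-squared),
    the headline numbers in Excel's regression summary output."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, 1.0 - ss_res / ss_tot

# Illustrative (price, demand) pairs only -- not the Moodle data
price = [1.0, 1.5, 2.0, 2.5, 3.0]
demand = [9.8, 9.1, 8.2, 7.4, 6.5]
a, b, r2 = ols(price, demand)   # expect a negative slope: demand falls as price rises
```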
In section B, you are expected to use multivariate regression analysis for Demand for bottled water, Price of bottled water and Income of consumers, and comment on your findings.

In section C, you are expected to use the coefficient of variation to analyse the given data, and comment on your findings.

For all sections (A, B and C), you may give your answers to 2 decimal places when appropriate; otherwise, use your judgement to give a suitable degree of accuracy, or follow the stated accuracy requirements.

Data: Download the data from the MS Excel file on Moodle to answer the questions in Sections A and B. The table shows condensed data for the demand for bottled water, the price of bottled water, and the personal disposable income of consumers. Units are not given for the data, and this will not affect the analysis.

COURSEWORK QUESTIONS:
Answer each question separately, clearly showing the relevant question number. No marks will be given for non-specific, generic answers obtained through the use of AI chat services. Give your answers to 2 decimal places when appropriate; otherwise, use your judgement to give a suitable degree of accuracy or follow the stated accuracy requirements.

Section (A): Bivariate Regression Analysis [40 marks]

1). Using Excel, plot separate scatter diagrams for the following:
(i) Demand for bottled water (Y) against Price of bottled water (x1).
(ii) Demand for bottled water (Y) against Income of consumers (x2).
Note that Demand should be plotted on the y axis for all graphs in this coursework. Comment on the relationship between the variables in each of the graphs (i) and (ii). [6 marks]

2). Assuming that Demand for bottled water (Y) and Price of bottled water (x1) are linked by a linear relationship, use the regression summary output in Excel to estimate a model for this regression in the form Y = α1 + β1x1, and interpret the value of the gradient. (The full regression summary output should be presented to support your answers.) [10 marks]

3a).
Find the coefficient of determination, R2, for Demand for bottled water and Price of bottled water, and comment on its value.
b). State whether there is a significant relationship between Demand and Price by carrying out an appropriate test, using the p-value at a 5% significance level. (The regression summary output in Excel should be used.) [7 marks]

4). Assuming that Demand for bottled water (Y) and Income of consumers (x2) are linked by a linear relationship, use the regression summary output in Excel to estimate a model for this regression in the form Y = α2 + β2x2, and interpret the value of the gradient. (The full regression summary output should be presented to support your answers.) [10 marks]

5a). Find the coefficient of determination, R2, for Demand for bottled water and Income of consumers, and comment on its value.
b). State whether there is a significant relationship between Demand and Income by carrying out an appropriate test, using the p-value at a 5% significance level. (The regression summary output in Excel should be used.) [7 marks]

Section (B): Multivariate Regression Analysis [50 marks]

Use multivariate regression analysis to investigate the relationship between Demand for bottled water (y), Price of bottled water (x1) and Income of consumers (x2):

6). Use the regression summary output in Excel to estimate the linear regression model for Demand for bottled water (y), Price of bottled water (x1) and Income of consumers (x2) in the form y = α3 + β3x1 + β4x2. Interpret the values of the gradients. (The full regression summary output should be presented to support your answers.) [10 marks]

7). State and compare the estimated coefficient (β1) for Price of bottled water (x1) in the bivariate regression equation in Section A, Question 2, to the estimated coefficient (β3) for Price of bottled water (x1) in the multivariate regression equation in Section B, Question 6. Are the coefficients different? If so, why?
Explain your answer, stating whether or not you think it is reasonable to assume that Demand for bottled water depends on both Price of bottled water and Income of consumers. [10 marks]

8). State and discuss the value of the coefficient of determination for the multivariate regression analysis for Demand for bottled water (y), Price of bottled water (x1) and Income of consumers (x2), and compare it to the value of R2 in the bivariate regression analysis found in Question 3, Section A. You must give your values to four significant figures. Giving reasons, state which coefficient of determination is best to use. [10 marks]

9). Perform an overall significance test to check the validity of the coefficients in the multivariate regression in Section B. Briefly discuss the suitability of the regression models used in Sections A and B with reference to the appropriate evidence, and hence state which model provides the best fit to the data. Were your findings as expected? (Note: do not simply re-state all your findings.) [10 marks]

10). Apart from Price of bottled water and Income of consumers, what other variables do you think could influence the demand for bottled water in the United Kingdom? Provide factual reasons (rather than personal opinions), with in-text citations and references (APA format) to support your answers. [10 marks]

Section (C): Coefficient of Variation [10 marks]

11). You are asked by an investor to analyse the stock risk of two companies: Argo Ltd. and Navis PLC. You are provided with the sample mean (X) and standard deviation (S) over a five-year period for the stock of both companies, as shown in the table below:

Year    Argo Ltd X1    Argo Ltd S1    Navis PLC X2    Navis PLC S2
1996    22.52          5.15           29.76           6.13
1997    24.56          4.93           23.33           5.70
1998    14.57          3.37           18.60           4.81
1999    23.65          7.95           21.09           6.92
2000    24.58          10.18          30.46           5.11

Use the coefficient of variation to state which stock was less risky in each year. Show your method and explain your answers. [10 marks]

TOTAL: 100 MARKS
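The method Question 11 asks for can be sketched directly: the coefficient of variation CV = S/X expresses dispersion per unit of mean, so for each year the stock with the smaller CV is the less risky one. A Python sketch using the 1996 row of the table (the remaining years follow the same pattern):

```python
def coefficient_of_variation(mean, sd):
    """CV = S / X: relative dispersion, comparable across stocks with different means."""
    return sd / mean

# 1996 row of the table: Argo Ltd (X1=22.52, S1=5.15), Navis PLC (X2=29.76, S2=6.13)
cv_argo = coefficient_of_variation(22.52, 5.15)
cv_navis = coefficient_of_variation(29.76, 6.13)
less_risky_1996 = "Argo Ltd" if cv_argo < cv_navis else "Navis PLC"
```

For 1996 the Navis CV comes out smaller, so Navis PLC was the less risky stock that year despite its larger standard deviation.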


[SOLVED] Literary and Cultural Studies Contemporary Approaches

Literary and Cultural Studies: Contemporary Approaches
BBN-ANG-114/h
Mon 08:30-10:00, 423/a

Aims of the seminar: This seminar aims to combine the practical analysis of literary works with an introduction to the basic theoretical and historical approaches to literature and culture that students are likely to meet during their studies at this department. We intend to work on the central skills of critical thinking, argumentation, academic writing and research.

Grading: Students will be graded on their in-class activity, in-class tests, and a home paper (written and evaluated in three stages!).

Syllabus: There is no syllabus. Make sure you know what to do for each week!

Materials: Students will be provided with the compulsory and recommended literature via Microsoft Teams.

Compulsory reading:
Bertens, Hans. Literary Theory: The Basics. Routledge, 2014.
Rainsford, Dominic. Studying Literature in English: An Introduction. Routledge, 2014.

Recommended reading:
Baldick, Chris. The Concise Oxford Dictionary of Literary Terms. Oxford University Press, 2001.
Pirie, David B. How To Write Critical Essays: A Guide for Students of Literature. Routledge, 2002.

In-Class Tests: Based on the compulsory readings, question sheets and in-class discussions.

Home Essay:
Stage 1) 3 March: title, thesis statement, and 3+ items of bibliography.
Stage 2) 31 March: 1-2 pages, full essay.
Stage 3) 28 April: 2-3 pages, full essay.
Topic: any piece of British literature that you are interested in.
Based on: your ideas + research. You have to quote at least 3 books or papers you've read on the topic, and at least 75% of the words should be your own.
Focused: about one thing only.
To the point: no long introductions; conclusions are needed.
Please submit electronically.


[SOLVED] IY427 Information Technology

Assessment Task Information

Key details:
Assessment title: Written assignment (individual): Practical Programming Assessment
Module Name: Information Technology
Module Code: IY427
Assessment will be set on: 1st February 2025
Feedback opportunities: Peer feedback in class, tutor feedback
Assessment is due on: Sept cohort: 9:00am Monday 7th April 2025; Jan cohort: 9:00am Monday 30th June 2025
Assessment weighting: 40%

Assessment Instructions

What do you need to do for this assessment?

Task: Develop an ATM simulator using the C programming language. Your work should be presented in a report format and must cover all the requirements below.

An automated teller machine (ATM) has two input devices: a card reader and a keypad. The card reader reads the information contained in the card's magnetic strip/microchip. The ATM software then asks the user to enter the correct PIN of the card using the ATM keypad; a PIN is usually a four-digit number. If the two PIN numbers are identical, login is given to ATM services and to the account information. The card's PIN and user account information, e.g., balance, are typically stored in remote database(s)/file(s), but for the sake of simplicity, assume the ATM deals with only two pre-initialized cards, Card 1 and Card 2. Card 1's PIN is initialized to 1234, with an initial balance of £1234.60. Card 2's PIN is initialized to 5678, with an initial balance of £848.50.

Your program will prompt the user to select Card 1 or Card 2, then request the user to enter the card's PIN using the keyboard. Only three attempts are given to the user to enter the correct PIN. If the user's attempts are exceeded, the card is blocked automatically; the ATM program should output a message that the card has been retained and prompt the user to contact the bank. If the PIN code is entered correctly, the user should be able to:
• Change the selected card's PIN.
• Check the balance.
• Withdraw money as a multiple of £5, £10 and/or £20 if the available balance is enough. (Note: this updates the selected card's balance.)
• Deposit money (any amount) into the account. (Note: this updates the selected card's balance.)
• Eject the selected card and select another one without quitting the program.
• Quit the program and return the card.

In each operation, the user should have the option to see, or not to see, a receipt on the screen showing the original balance, the transaction type, and the new balance after the transaction. The user must be able to do as many transactions as desired, by having the option after completing each transaction to quit or return to the main menu. All transactions performed in a single program run should be saved in a text file in their order of occurrence, i.e., the transactions' receipt information in order of occurrence. Ensure clear prompts to users, assuming they have no understanding of how the program functions. Your program should not exit unexpectedly.

Guidance: For this assessment, you should make use of the following themes/activities that you have already completed. These activities have been designed to support this summative assessment:
• Number Systems in Computing and Basic Data Types, Introductory C Language Topics, Program Design Implementation and Testing, and Complex Data Structures.
• Reference book Section 1: Introductory C Language Topics, Section 2: Design Implementation and Testing, and Section 3: Complex Data Structures.

Testing: Unit and integration testing will be key to proving that the software produced is reliable.

Please note: This is an individual assessment, so you should not work with any other student.

Structure: This assessment will require at least 2 parts:
1) The report as detailed below, as a Word document or PDF.
Your report must include:
Title page, contents page, page numbers and declaration of ownership.
Section 1: Algorithm - the designed/used algorithm expressed as a Jackson Structured Diagram.
Section 2: Technical Overview - a description of all the variables, functions and data structures used.
Section 3: Testing Plan - detailed testing plan table(s), stating whether they follow black-box testing, white-box testing, record testing, etc.
Section 4: Testing and Evaluation - your testing plan table(s) including the outputs obtained through your code. Confirm whether they match the expected outputs, and comment on any differences between the expected and obtained outputs.
Section 5: Summary - a short (no more than one page) and accurate reflection of how well you think the solution works and what improvements are needed or could be made as future work.
Section 6: References - all resources used, including the module reference book.

2) A C file containing the complete source code, plus any text files.

C code with comments: the complete working C program implementation of your algorithm, with comments in the code describing what it does.

Theory and/or task resources required for the assessment: You need a good understanding of the Fundamentals of Numeric Data, Data Types, Algorithms, and Abstraction and Decomposition themes for this assessment.

Referencing style: Literary references should be in the Harvard style, and any resources used must be referenced in a bibliography.

Expected word count: You should write no more than 1 page for each section. Note that this is a limit, not a target. There is no minimum word count, and you should be succinct in your writing and avoid repetition. Use diagrams, lists and tables wherever possible. The References and Appendix sections are not included in the word count. You may write additional appendices as needed.
Learning Outcomes Assessed:
- Program in C and illustrate good coding practice through the design and verification of simple control programs
- Apply a range of programming techniques and functions in programming (e.g., arithmetic operators, standard output, precedence, local variables and Boolean expressions, global variables, parameters, return values, frames, strings and command-line arguments)
- Compile and debug software
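The withdrawal rule above (amounts dispensable in £5/£10/£20 notes, capped by the balance) reduces to checking for a positive multiple of £5. Below is a minimal sketch of that validation logic; it is written in Python purely for brevity (the assessment itself must be implemented in C), and the function names are illustrative only.

```python
def can_withdraw(amount: int, balance: int) -> bool:
    """Valid if the amount is a positive multiple of 5 (so it can be
    dispensed in £5/£10/£20 notes) and does not exceed the balance."""
    return amount > 0 and amount % 5 == 0 and amount <= balance

def withdraw(amount: int, balance: int) -> int:
    """Return the new card balance, or raise for an invalid request."""
    if not can_withdraw(amount, balance):
        raise ValueError("amount must be a multiple of £5 within the balance")
    return balance - amount
```

Any multiple of £5 can always be composed from £5 notes alone, so no note-combination search is needed; only the multiple-of-5 and balance checks matter.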


[SOLVED] 5CCE2SAS Signals and Systems COURSEWORK 2 MINI PROJECT

COURSEWORK 2: MINI PROJECT
5CCE2SAS: Signals and Systems

Aim: After completing this coursework, students will be able to record signals using microphones, be familiar with potential noise sources, and apply different techniques learned during the module to analyse body signals.

Project assignment
1. Data:
(1) A dataset containing swallowing sounds (dataset_1) is provided on Keats for the analysis. The data includes five samples of swallowing water and five samples of swallowing food (biscuits).
(2) Each group shall also measure the sound produced during swallowing using their mobile devices. Each group should record five samples of swallowing sounds (dataset_2), either water or food (pick one and state which). If you choose food, then consider only the food samples from dataset_1 for comparison with your dataset_2; the same applies if you decide to record swallowing water.
2. Design a data analysis approach (remember to work on either swallowing water or food, not both) that shall include:
a. Illustration of time-domain signals
b. Quantification of the size and the signal-to-noise ratios of the signals, and investigation of how downsampling the signals affects these two metrics
c. Illustration of frequency spectra
d. Estimation of the essential bandwidths of swallowing events, and investigation of how downsampling the signal affects the essential bandwidths
e. Comparison of the above metrics between dataset_1 and dataset_2
f. Graphs or tables summarising the key findings (descriptive statistics)
3. Write a report as a scientific article in the IEEE conference paper format (template: https://www.ieee.org/conferences/publishing/templates.html) of a minimum of 4 and a maximum of 5 pages (including references), following the required format of the template. Excess pages will not count towards the grade. Balance the text/figure space ratio correctly. The report should contain:
a. a short introduction motivating the application of swallowing-event analysis
b. a method section describing the data and its collection (protocol); this section should also describe the data analysis approaches in enough detail to permit reproducibility of the results
c. a results section summarising the key outcomes using descriptive statistics
d. a discussion of your approach and results
e. a table with a peer assessment of each member's contribution (in percentage)
f. a list of a minimum of four references (maximum of eight)

Deliverables (what to submit):
1. Your data
2. Your scientific report in PDF
3. A collection of your codes (structured with comments) in .m or .py. Codes should run and produce all figures submitted in the report.

Submission is made by one member only on behalf of the entire group, but it is the whole group's responsibility to verify that the submission has been completed correctly.
N.B.: We will check for plagiarism, and the misconduct team will investigate similarity scores above 20%.
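As a starting point for task 2b above, signal size and signal-to-noise ratio can be quantified from mean signal power. The sketch below is a minimal Python/NumPy illustration, assuming you have already isolated a swallowing segment and a noise-only segment; it uses naive decimation for downsampling (in practice an anti-aliasing filter, e.g. scipy.signal.decimate, would be preferable).

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in dB from the mean power of a signal segment versus
    a noise-only segment recorded under the same conditions."""
    p_sig = np.mean(np.square(signal, dtype=float))
    p_noise = np.mean(np.square(noise, dtype=float))
    return 10.0 * np.log10(p_sig / p_noise)

def downsample(x, factor):
    """Naive decimation: keep every `factor`-th sample.
    (No anti-alias filtering - fine for a first comparison only.)"""
    return x[::factor]
```

Comparing snr_db(seg, noise) before and after downsample(seg, k) for several factors k directly addresses the "how downsampling affects these metrics" part of the task.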


[SOLVED] CSCI 4041 HW3 Part B

CSCI 4041 Part B – Priority Queue (Programming Problem)

Submission: Submit the following file to the “Homework 3B” submission on Gradescope.
• priority_queue.py

Problem H3.6: Update Priority (20 points)

Instructions: Many advanced algorithms require the update of a key in a priority queue. For example, Dijkstra's single-source shortest path algorithm described in Section 22.3 of our textbook takes advantage of the MAX-HEAP-INCREASE-KEY procedure described in Section 6.5. The update of a key, however, needs to be efficient to improve the runtime requirements. For this problem, we will extend a heap to create a priority queue. Using our priority queue, we can update an item's key in O(lg n) time. This is because we also ensure that each time we look up an index by id, the lookup runs in O(1) time.

Task: You are given a set of tasks to add to a priority queue. Each task has an id and a priority key. You are provided with the following base code:
• H3_6.py – This file has example code for testing your implementation. (You will not need to modify this file.)
• binary_heap.py – This file implements a binary heap. (You will not need to modify this file.)
• task.py – This file defines the Task object. (You will not need to modify this file.) Below is a description of its methods:
◦ get_id() – Returns the id of a task.
◦ get_name() – Returns the name of the task.
◦ get_key() – Returns the key of a task, which represents the task's priority. The higher the key, the higher the priority.
◦ set_key(key) – Changes the priority of a task.
• priority_queue.py – This file implements a priority queue, which inherits from the binary heap. (You will need to modify and submit this file.)

Extend the heap functionality to create a priority queue. Our priority queue expects that item.get_id(), item.get_key(), and item.set_key() exist, therefore allowing the use of the Task object.
You will need to implement the following methods:
◦ extract_max() – Take the top item off the heap and maintain the heap property. The method should run in O(lg n) time.
◦ insert(item) – Add the item to the heap in the correct location based on the item's priority (i.e., item.get_key()). The method should run in O(lg n) time.
◦ lookup_by_id(id) – Return the index of the item in the heap structure. If the id is not found, -1 is returned. The method should run in O(1) time.
◦ update_priority(id, key) – Look up an item in the heap, set the key for that item, and maintain the heap property. The method should run in O(lg n) time.
◦ on_index_changed(item, index) – This method may be helpful for monitoring when an item's index changes during the heapify() method of the binary heap.

Testing: To test the program you can run H3_6.py, which creates a priority queue and calls the methods you implemented. You can test your program with the following command:

python3 H3_6.py

You should see the following output based on a hypothetical scenario involving CSCI 4041:

Tasks:
------------------------------
Id : Name (Priority)
------------------------------
1 : Study for midterm. (20)
2 : Email professor. (21)
3 : Start homework. (20)
4 : Study induction for midterm. (19)
5 : Start homework problem H3.1. (18)
6 : Email professor and TAs. (25)
7 : Start homework problem H3.6. (16)
8 : Finish homework problem H3.1. (20)
9 : Post to Piazza for help on H3.6. (15)
10 : Study Master Theorem for midterm. (14)
11 : Request homework extension. (42)
12 : Take a picture of my cat. (6)
13 : Attend discussion. (5)
14 : Feed the dog. (2)
15 : Live dangerously. (1)


[SOLVED] EEE60204 ROBOTICS, DYNAMICS AND CONTROL

EEE60204 ROBOTICS, DYNAMICS AND CONTROL
Group Assignment (30%)
DATE: 25 February 2025

Deliverables:
1) Group report
2) Contributions table
3) Video presentation (screen recording of the robot simulation and its movement)
4) Source code in a zip file

Contributions table columns: Name | Student ID | Work done | Contribution percentage | Signature

Report Outline:
Section A: Introduction
1. Outline the objectives of utilizing ROS 2 MoveIt 2.
2. Discuss the advantages and disadvantages of ROS 2 MoveIt 2.
Section B: Task Planning
1. Define the work envelope, specifying the range of X and Y coordinates.
2. Determine the initial placement of objects.
3. Establish the pattern for output placements.
Section C: Coding
• Explain the logical flow of the code.
• Implement object addition.
• Perform grasping operations.
• Execute object placement.
Section D: Analysis of Robotic Arm Path Planning
1. Examine all possible movement paths.
2. Justify the selection of the optimal path.
Section E: Obstacle Avoidance (Optional)
• Assess path planning strategies when obstacles are introduced.
Section F: Conclusion
• Summarize the work and propose future improvements.


[SOLVED] EL1241 Analogue Electronics 2024/25

Academic Year: 2024/25
Assessment Introduction
Course: BEng (Hons) Electronic Engineering
Module Code: EL1241
Module Title: Analogue Electronics
Title of the Brief: Frequency Response and Filter Design
Type of assessment: Coursework

This Assessment Pack consists of a detailed assignment brief, guidance on what you need to prepare, and information on how class sessions support your ability to complete successfully. You'll also find information on this page to guide you on how, where, and when to submit. If you need additional support, please make a note of the services detailed in this document.

How, when, and where to submit: The deadline for this assessment is 21st March 2025 at 23.59, via the submission zone found in the EL1241 Blackboard area. Please note that this is the final time you can submit – not the time to submit! The Turnitin submission link on Blackboard will be visible to you from: 12th December 2024. Feedback will be provided by: 26th April 2025. You should aim to submit your assessment in advance of the deadline.

Note: If you have any valid mitigating circumstances that mean you cannot meet an assessment submission deadline and you wish to request an extension, you will need to apply online via MyUCLan, with your evidence, prior to the deadline. Further information on Mitigating Circumstances is available via this link.

We wish you all success in completing your assessment. Read this guidance carefully, and if you have any questions, please discuss them with your Module Leader or module team.

Additional Support available (all links are available through the online Student Hub):
1. Academic support for this assessment will be provided by contacting Zhifeng Ma or Juan Du.
2. Our Library resources link can be found in the library area of the Student Hub, or via your subject librarian at [email protected].
3. Support with your academic skills development (academic writing, critical thinking and referencing) is available through WISER on the Study Skills section of the Student Hub.
4. For help with Turnitin, see Blackboard and Turnitin Support on the Student Hub.
5. If you have a disability, specific learning difficulty, long-term health or mental health condition, and have not yet advised us, or would like to review your support, Inclusive Support can assist with reasonable adjustments and support. To find out more, you can visit the Inclusive Support page of the Student Hub.
6. For mental health and wellbeing support, please complete our online referral form, or email [email protected]. You can also call 01772 893020, attend a drop-in, or visit our UCLan Wellbeing Service Student Hub pages for more information.
7. For any other support query, please contact Student Support via [email protected].
8. For consideration of academic integrity, please refer to the detailed guidelines in our policy document. All assessed work should be genuinely your own work, and all resources fully cited.
9. For this assignment, you are not permitted to use any category of AI tools.

Preparing for your assignment: Ensure that you fully understand the requirements of the assessment and what you are expected to complete. The assignment will be introduced in the lecture session, where you can ask any questions; you can also ask for clarification by contacting the module team.

The following module learning outcomes will be assessed in this assignment:
· Demonstrate an understanding and application of basic electrical and electronic principles.
· Describe the theory of operation and principal characteristics of simple analogue electronic devices and circuits.
· Relate the results of experiments on simple analogue electronic circuits to theory.

Please read over the guide to writing a technical document at https://www.theiet.org/media/5182/technical-report-writing.pdf and ensure that you fully understand the requirements of the assessment. There will be a lecture session on the assignment and on writing a technical document. Ensure that you research and read into the subject area before writing the report so that you have a good background understanding of the subject area.

Assignment Brief
A low-pass filter is constructed, consisting of a 10 nF capacitor connected in series with a 16 kΩ resistor as shown in Figure 1. An a.c. voltage (input voltage) is applied to this series combination, and the output voltage is measured across the capacitor. The task involves:
a. Calculate the cut-off frequency fc of the filter, in Hz.
b. On the log-linear graph paper provided on Page 5, sketch a straight-line approximation (consisting of two separate straight lines) of the magnitude response of the filter, showing the gain of the circuit in dB against frequency in Hz on a logarithmic scale.
c. Using the straight-line approximation of the magnitude response of the filter as a guide, sketch a curve representing a better approximation of the filter's magnitude response. (HINT: consider the actual gain of the filter at the cut-off frequency.)
d. Estimate the gain, in dB, of the filter at the following frequencies: f = 100 Hz, f = 1 kHz, f = 10 kHz, f = 20 kHz, f = 100 kHz.
e. Construct the circuit on a breadboard and vary the frequency of Vin between 10 Hz and 1 MHz as detailed in Table 3, and then plot a graph of output in dB against frequency (plotted on logarithmic-scale graph paper). Vin will come from a signal generator set at 10 V peak to peak. Lay out your results table like this – note the frequencies which have already been chosen, as in Table 3.
f. Compare and comment on the calculated features of the filter (i.e. cut-off frequency, magnitude response, gain at various frequencies) and the actual measurements.
g. Modify the circuit in Figure 1 into a CR circuit by swapping the positions of resistor R1 and capacitor C1. Calculate the values of resistor R1 and capacitor C1 using the cut-off frequency allocated in Table 4. The values of the resistor and capacitor need to be selected from the E48 list of preferred values. Calculate and comment on the tolerance of the CR filter based on the E48 preferred values.
h. Repeat tasks b–d for the designed CR filter circuit.
i. Instead of constructing the CR filter circuit on the breadboard, simulate your design of the circuit in Proteus EDA software and create its frequency response.
j. Compare and comment on the calculated features of the filter (i.e. cut-off frequency, magnitude response, gain at various frequencies) and the simulation result.

Figure 1: RC filter circuit

Word limit: A maximum of 1000 words (see notes below for further information).

Technical Report Writing
To complete the report, you will have to thoroughly research the area using reliable sources and precisely reference where your information and statements are from. The aim of the report is to be clear and concise and to convey technical information to the reader; note that the reader is familiar and experienced in the area. Ensure that you write your report for this audience. A guide on writing a technical document can be found at the following link (this will also be uploaded to Blackboard): https://www.theiet.org/media/5182/technical-report-writing.pdf
Please read over the above document to ensure that you are clear on what a technical report is and know what you are required to complete. Note that the above is a guide, not an explicit standard; you will be required to ensure that your technical report contains the relevant information presented correctly for the reader.
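For orientation on the cut-off and gain tasks: with the component values in the brief (R = 16 kΩ, C = 10 nF), a first-order RC low-pass has fc = 1/(2πRC) and gain magnitude |H(f)| = 1/√(1 + (f/fc)²). The short Python check below is illustrative only; the brief itself asks for hand calculation and sketched graphs.

```python
import math

R = 16e3   # 16 kΩ, from the brief
C = 10e-9  # 10 nF, from the brief

fc = 1 / (2 * math.pi * R * C)  # cut-off frequency of the RC low-pass

def gain_db(f):
    """First-order low-pass magnitude in dB: -10*log10(1 + (f/fc)^2)."""
    return -10 * math.log10(1 + (f / fc) ** 2)

print(f"fc = {fc:.1f} Hz")  # ≈ 994.7 Hz, i.e. roughly 1 kHz
for f in (100, 1e3, 10e3, 20e3, 100e3):
    print(f"{f:>8.0f} Hz : {gain_db(f):7.2f} dB")
```

Note that the gain at fc itself is -10·log10(2) ≈ -3 dB below the passband, which is the hint behind the "better approximation" sketch: the true curve passes 3 dB below the corner of the two straight lines.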
Ensure that you research and read into the subject area before writing the report so that you have a good background understanding of the subject area. You will need to provide a short report, no longer than 1,000 words, which shows the calculation for each task in the Marking Criteria and Weighting section below, with appropriate assumptions, descriptions and comments. You should use the guideline below to structure your report.

For the final report submission, make sure that each page is marked with the date of completion, the page number, and the total number of pages submitted. Make sure that the front page of your submission displays this information prominently, along with the module name and number and the assignment title. Your work must be referenced using the Harvard referencing system, available here: https://v3.pebblepad.co.uk/v3portfolio/uclan/Asset/View/Gm3mmGk6sM3RgHZnjGfh7mm6pM. Further information to support your development will be available in the assignment briefing session and on Blackboard.

Notes on Word Count and Referencing
For good marks, and given the limited word count, you should produce work that is accurate, thorough, well-argued, clear, accurately referenced, relevant, and written in correct (UK) English grammar and spelling. You may include figures and tables with short captions (25 words each) and a list of references without affecting the overall word count. Remember that you have limited words, so ensure that you “stick to the point” and do not go into detail on superficial elements. Ensure that you include references when discussing technical facts and statements about the technology used. You must reference all your sources of information. These should be cited in the appropriate part of the report and fully identified, to the Harvard referencing standard, in a list at the end. Website articles must be properly referenced to be considered legitimate references.
Presentation of assignment work
Except where specifically stated in the assignment brief, assignment work submissions should be word-processed, in Microsoft Word 2016 format, with a footer comprising your module code, the date and the page number.

The following module learning outcomes will be assessed in this assignment:
· Demonstrate an understanding and application of basic electrical and electronic principles.
· Describe the theory of operation and principal characteristics of simple analogue electronic devices and circuits.
· Relate the results of experiments on simple analogue electronic circuits to theory.

Marking Criteria and Weighting
Your submission will be marked in accordance with the following marking scheme:
1. Derivation (30%) – Concise and accurate design decisions based on appropriate assumptions.
2. Plots and graphs (20%) – Clear, neat and correctly drawn graphs for the tasks in Section 5.
3. Implementation (40%) – Evidence of implementing both RC and CR circuits on a breadboard, and the relevant test results.
4. Presentation (10%) – Presentation requirements met in full. Concise, complete and well-structured documentation with correct use of English throughout. Neat diagrams, clearly presented. Contents page and page numbers.
Total: 100%


[SOLVED] COMP3004/COMP4105 Designing Intelligent Agents Coursework Spring 2025

COMP3004/COMP4105 Designing Intelligent Agents
Coursework Spring 2025

Overview
The coursework for this module is based around (1) designing intelligent autonomous agents and an environment with which they interact, (2) setting those agents a task, (3) asking one or more questions about that task, and (4) evaluating it using experimental methods. You will then present the results from this in a report, video or podcast, which will also explain the context for the work. Students doing COMP4105 will in addition do a short presentation explaining how their coursework relates to contemporary research and technology.

Details
An autonomous intelligent agent is a program that operates in a particular environment, perceives aspects of that environment, and then uses its intelligence to choose actions that change that environment, to carry out some task. Typically, these actions are a mixture of responses (either immediate or deliberated) to its perception and memory, and proactive actions such as exploration. Your task for this coursework is to design an agent-based system containing the following four aspects:

An Environment. This is the (virtual) place where the agents will operate. It could be one of:
· A simulation of a physical environment in which mobile robotic agents move. This could be the simulation used in the classes earlier in the semester (perhaps extended), a robot environment such as The Player Project (http://playerstage.sourceforge.net), or a project in Unity or a similar game environment if you are familiar with one from elsewhere.
· A language environment, for example where humans are interacting using written or spoken language with one or more intelligent agents.
· The Bristol Stock Exchange system (https://github.com/davecliff/BristolStockExchange) or a similar simulation of some aspect of the economy or society.
· A game environment such as Ms. PacMan (https://gym.openai.com/envs/MsPacman-v0/), the Open Racing Car Simulator (http://torcs.sourceforge.net), RoboCup (https://www.robocup.org/leagues/23) or similar (see e.g. http://www.gvgai.net).
· One of the more complex task environments from the OpenAI Gym (https://gym.openai.com).

There is no need to develop the environment yourself – the focus of the project will be on the agents in the environment (robots, trading agents, game-playing agents, autonomous drivers, language agents, etc.) – but it is likely that you will set up the details of the environment to address your specific question. You are allowed to use the code from the classes, but please try to make it clear broadly which parts of the code are taken from the class examples, and which is your own work (we appreciate that this is sometimes complicated to do at a line-by-line level, but you should indicate this in broad terms).

Autonomous Agents. You should introduce one or more autonomous intelligent agents into the environment, which use some kind of AI to solve a task.
· Examples of AI could be an AI planning system such as Goal Oriented Action Planning (http://alumni.media.mit.edu/~jorkin/goap.html), a search algorithm such as A* search, a genetic or swarm search, a reinforcement learning algorithm, fuzzy logic, or a hard-coded reactive or state-machine AI.
· The task will be one relevant to the environment: e.g. a robot vacuum cleaner clearing up dirt, a poet trying to write verse about the day's news, a trader trying to optimise its returns, a game player trying to get a high score in a game, etc.

Within reason, you can use any language to do this. If you are planning to use anything other than Python, Java, C/C++, MATLAB/Octave, R, JavaScript, or mainstream web technologies such as HTML/CSS/JS, then please mention this in your topic approval.

A Question. You should ask a specific question (or a set of related questions) about your system.
For example:
· How do different approaches (a genetic algorithm, an A* search algorithm, a hard-coded heuristic) compare in terms of task performance?
· How does the performance of the system change as we vary the number of agents in it?
· If the system is trained on one version of the environment, does that learning transfer over to a new version of the environment?
· How do different kinds of communication/coordination between agents affect the efficiency of those agents on the task?
· How much improvement does storing some information (e.g. a map of the environment) make compared to carrying out the task in a purely reactive way?
· How do different kinds of sensing/perception systems affect the capacity of the agent to carry out its task?
· How sensitive is the agent to error/noise?

A Set of Experiments. You should answer your question by carrying out a set of experiments. Remember the structure that we talked about in one of the lectures:
· implement code that carries out a run of the agent's behaviour and measures performance
· then, run that code multiple times to get a measure of average performance
· then, repeat that process for the different conditions in your question, and use descriptive statistics, charts/visualisation, and/or inferential statistics (e.g. significance tests) to test your question

Then, you should discuss the question using these experimental results as your evidence. If this evaluation involves asking people to interact with your code, then please read the notes on research ethics that are in the Coursework section of the module Moodle page.

Constraints. Your project is expected to demonstrate around 90 hours of effort. So, you should not just run a basic reinforcement learning or convolutional neural network library/tutorial on a simple environment from AI Gym or similar, though you can use these as part of your work. You are not allowed to develop a basic chatbot, i.e.
a single system that interacts with a human using intent matching, identity management, transaction processing, question answering, etc. If you want to do something on language agents, you should do something that goes beyond this (e.g. a creative language system, a system that involves interactions between multiple language agents, or a system that involves a language-based interface to another system). You are at liberty to use any readily available libraries, frameworks, APIs, services, datasets, etc. as you want – but your work must demonstrate substantial effort in using, combining and building on what you use. You are not permitted to simply re-submit coursework from another module, but you could potentially build on code that you have written previously in a new way. If your work extends or re-uses coursework from other modules, it must make a substantial and distinctive contribution beyond what was in those courseworks, and you must clearly identify what the new contribution is.

Examples
Here are a few examples of things that you could do. You don't have to do one of these – indeed, we would prefer you to come up with your own idea – but these would all be acceptable project ideas if you want to do them:
· Take the “robot vacuum cleaner” from the early classes, and experiment with different numbers of robots and different coordination strategies (e.g. robots trying to stay a fixed distance from each other, compared to sharing a map that they build up).
· Explore how multiple language agents, each of which has a particular set of knowledge or a particular perspective, can discuss/argue with each other to explore a topic.
· Take a number of different trading strategies and run them in the Bristol Stock Exchange system with varying amounts of noise/uncertainty, to see how robust each strategy is.
· Develop a game-playing agent, or a control agent for an NPC or opponent in a game, and explore different strategies for playing (or learning to play) the game.
· Take the “avoid the cats” problem from the class, and compare a number of strategies for the problem: warning the cats vs. moving out of the way, and learning when to act based on a simple statistical approach vs. a decision-tree approach.
· Consider the problem of planning a robot's movement around a mapped environment (e.g. the map generated from the WiFi triangulation introduced in one of the classes). Contrast A* search and genetic algorithms on this problem, and compare them both against random wandering.
· Explore a swarm simulation such as the Boids flocking algorithm explored in one of the classes, and map out how the overall behaviour of the birds relates to the parameters in the algorithm.

Topic Approval
You should submit a short description of your project idea (a couple of paragraphs, 100-200 words) on the Moodle page by 15:00 on 26th March 2025. We will then give you feedback on whether the project is an acceptable one, and how it might be modified or improved if it is not acceptable. If you submit before this date, we will endeavour to give you early feedback.

Submission
By 3pm on 13th May 2025 you should submit the following via Moodle. This deadline may be extended if you have a support plan or extenuating circumstances. Late submissions will incur a penalty of 5% per working day, in line with the standard University late policy.
COMP3004
Your submission should contain:
A report of 2500-4000 words, or a 20-minute video, or a 20-minute audio podcast, in which you describe:
· The core ideas of your project; clearly state the question that you are trying to answer
· A review of relevant ideas, technologies and research papers
· How you designed the environment and agents in order to address that question
· The technologies used, and the challenges that you met in doing the implementation
· How you set up and ran your experiments
· The results from your experiments
· A discussion of the question in light of the experimental results
· A conclusion, where you summarise the work, reflect on its successes and limitations, and briefly mention some ideas for how you would take the work forward if you had more time
The target audience for this is students in your year on your degree – so there is no need to explain basic computer science ideas, but you should not assume a deep knowledge of your particular topic.
A copy of your code, either as an upload or a link to a repository.
Anything else that you think would be helpful for the markers, e.g. sample outputs from your system, a link to a brief video demonstrating it working, reports from user studies/interviews, etc.
For students on COMP3004, the mark for this portfolio of work will count as 100% of the module mark – there is no exam.

COMP4105
In addition to the portfolio of work described for COMP3004, students on COMP4105 should also give a 10-minute presentation about their work (dates/times will be arranged; this is likely to be between 14th and 16th May 2025). This should give an overview of your project and explain how it is informed by research ideas from AI and intelligent agents. For students on COMP4105, the portfolio of work (including the report/video/podcast) will count for 90% of the marks on the module and the presentation for 10% – there is no exam.
How the Work will be Marked
Marking will take into account:
· the intrinsic complexity of the overall project
· background research and how you have used it to contextualise your work
· the choice of task environment and how you have used it/adapted it for your specific project
· the effective use of artificial intelligence and agent-based systems ideas from the course and your wider studies in designing your autonomous agents
· how clear your question(s) are, how well the experiments have been designed to answer them, and your level of rigour in planning and analysing the experiments
· how well the report answers the question by using the evidence from the experiments
· the overall clarity and structure of the report, appropriate use of scientific and technical English, and the quality of charts, diagrams and pseudocode where relevant
· the quality of reflection on the successes and limitations of the work
· (for students doing a presentation) the structure of the presentation, the clarity of explanations, and good use of slides or other visual aids

Marking Scheme – Main Project
Each of the following descriptors gives a broad idea of the achievement expected for a mark in that range. Clearly, individual projects may fall short in some areas and show excellence in others. The marking should also be adjusted to reflect the intrinsic difficulty of the project.

90-100: Marks in this range are reserved for a superb all-round performance. Work done in all aspects of the project goes beyond even high expectations. The student has shown a thorough understanding of the problem. All tasks, including very challenging aspects and extensive stretch goals, have been successfully completed. The project shows depth and engagement with research ideas, and everything has been completed to a high standard. The report could form the basis of a publishable conference/workshop paper.

80-89: Excellent contributions to all areas of the project.
Exceeded expectations in some areas by carrying out implementations and experiments of a high level of complexity or sophistication. Demonstrates knowledge and understanding of the ideas that goes beyond the material covered in the module. Clear appreciation of the project as a whole, its adequacies, limitations and possibilities for future development. The project demonstrates insight and depth beyond that usually expected in undergraduate/master’s work. Report presented very clearly and to a good professional standard. 70-79 Very good contributions to all areas of the project. Successful completion of the project tasks including some more challenging aspects and stretch goals. Demonstrated initiative and creative problem-solving ability. Able to undertake the work in a competent and independent manner. Able to reflect accurately on adequacy and limitations of the project’s achievements. Report presented very clearly and to a good professional standard. 60-69 Good appreciation of background. A good attempt at applying this to the task, with demonstrated ability to cope with difficulties. Good technical skills in several areas. Whilst most of the core aims of the project have been achieved, it might come a little short in some areas. Good reflective understanding of the project. Well organised and structured report, with perhaps a few parts unclear. 50-59 Postgraduate pass level. Satisfactory background reading and a competent attempt at their tasks. Reasonable technical competence demonstrated. The core parts of the agents, environment and experiments have been completed satisfactorily, but little achieved beyond that. Able to reflect satisfactorily on the project. Or, a lower-difficulty project carried out well. Decently written and structured report, but with perhaps some aspects unclear, unbalanced or underdeveloped. 40-49 Undergraduate pass level. Competent background reading and appreciation of the project area. Basic technological competence. 
Some areas of the core tasks may be incomplete, but a decent attempt has been made at them. Able to reflect in a limited way on the project. Report gives an overview of the project and is readable, but perhaps lacks detail, clarity or focus in some areas and/or is too informal or unbalanced. 30-39 Unsatisfactory. Some attempt has been made at the background reading but clearly only partial understanding of project topic. Incomplete attempt at the core tasks. Weak technical competence. Little ability to reflect adequately on the project. Report fails to cover some aspects of the work. 20-29 Inadequate background reading, but shows some limited understanding of how ideas can be linked to the task. Minimal attempt at the core tasks, showing poor understanding. A substantial amount of work is still needed to achieve the core tasks. Minimal reflection on the project. Report is disorganised, overly-short and contains areas that are very unclear and/or demonstrates misunderstandings. 10-19 Minimal attempt at background reading, inappropriate use of material, almost no attempt at core tasks. Very poor understanding of the problem. Minimal or no reflection on the project. Minimal report, or report shows substantial misunderstandings. 0-9 No or almost no significant attempt. Marking Scheme—Presentation (COMP4105 only) Each of the following descriptors gives a broad idea of the achievement expected for a mark in that range. Clearly, individual projects may fall short in some areas and show excellence in others. The marking should also be adjusted to reflect the intrinsic difficulty of the project. Band Guidelines 9-10 A professional-level presentation of exceptional clarity and very clear structure, with very high-quality slides or other visual aids that flow seamlessly with the spoken presentation. Shows a thorough and up-to-date understanding of research and technologies in the area and how they relate to the work. 
7-8 A clearly structured presentation, which explains all aspects of the project well, and which has high-quality slides or other visual aids that are tied in strongly with the spoken presentation. Well-grounded in an understanding of research and technology in the area, including contemporary developments. 5-6 Postgraduate pass level. A competent presentation that has a decent structure, gives a competent description of most aspects of the project, and where the slides and other visual aids are largely clear and related to the spoken presentation. Shows some awareness of how the project connects to research and technology in the area. 3-4 A presentation that has some level of organisation but where the topics are not presented in a clear order or where the presentation jumps from topic-to-topic, some explanations not clear, visual aids provided but not very clear and/or not very related to the spoken presentation. Shows lack of awareness of the wider research/technology context. 1-2 A presentation that mentions some aspects of the project work but is largely disorganised to the point where it cannot be followed and where most explanations are unclear, and where visual aids are unclear and/or not related to the spoken presentation. Minimal attempts to connect the work to research/technology. 0 No significant attempt at presentation      


[SOLVED] ALY6070 Shiny Application

ALY6070: R Shiny Application

Purpose of Assignment (WHY)

Communicating complex data information and insights through storytelling with data visualizations, using dashboards, scorecards, spatial data representations and annotations, is an important aspect of data science. In various roles you must be able to evaluate, propose and implement appropriate visualizations for a specified audience using key informational design concepts. In this assignment you will create an R Shiny group dashboard.

Program Learning Outcomes

PLO7: Design and deliver presentations, reports, and recommendations that effectively translate technical results/data solutions and are coherent and persuasive to different audiences.

Course Learning Outcomes

This assignment is directly linked to the following key learning outcomes from the course syllabus:
● CLO1: Design dashboards that "tell a story" using a narrative flow
● CLO2: Use graphic design concepts that enhance accessibility and aesthetics
● CLO3: Create effective data visualizations by understanding the context, choosing an appropriate visual, and eliminating clutter
● CLO4: Create and present data visualizations that focus the attention of the stakeholder on key data insights
● CLO5: Using ethical strategies, identify and create visualizations that are not biased or misleading to the audience.

Assignment Description (WHAT)

R Shiny Application

Based on your initial analysis, please develop visualizations that effectively communicate your story using data. To do this, you should create visualizations that reflect the principles discussed in the course (e.g., storytelling, choosing effective visuals, gestalt principles, design principles, etc.). Next, you will review the dashboards created by each member of your group and select the best visualizations (as a group). Finally, you will incorporate these visualizations into an R Shiny dashboard.

The dashboard should:
● Include as many different visualizations as needed; these should be of varying types
● Answer the research/business question and display the key information that the intended audience needs
● Be easy to navigate and visually appealing
● Reflect the data accurately and communicate it appropriately
● Tell a story

PLEASE NOTE: the group needs to submit at least three R Shiny dashboards (one for each member of the group) AND the code used to create the dashboards.

Rubric: R Shiny Group Dashboard

Data Preparation (20%)
· Above Standards: The data set was cleaned, formatted, and prepared for analysis in an exemplary way.
· Meets Standards: The data set was cleaned, formatted, and prepared for analysis in a satisfactory way.
· Approaching Standards: Flaws are present in the way the data set was cleaned, formatted, and prepared for analysis.
· Below Standards: Substantial flaws are present in the way the data set was cleaned, formatted, and prepared for analysis, making the data set useless for analysis.

Concept Clarity (30%)
· Above Standards: Initial concepts were original, relevant, and clear. The analysis was developed beyond expectations throughout the design process.
· Meets Standards: There is strength and relevancy in the initial concepts, most of which were visible throughout the iteration process.
· Approaching Standards: The project presents a recognizable concept, but the presented analysis could have been taken much further in terms of depth, argument and clarity.
· Below Standards: The project lacks conceptual maturity - it does not have a clear point or contains flawed or contradictory arguments.

Innovative Element (25%)
· Above Standards: The chosen visualization type presents an innovative solution beyond being appropriate for the selected data structure.
· Meets Standards: Selection of visualization type is appropriate for the selected data structure.
· Approaching Standards: There are other visualization types that are better suited for the selected data structure.
· Below Standards: Selection of visualization type is not appropriate for the selected data structure.
Visualization Design (25%)
· Above Standards: The visualization is not only legible due to appropriate choices of colors, placement of visual forms, labeling and annotation, but demonstrates a high level of design competency.
· Meets Standards: The visualization is legible due to appropriate choices of colors, placement of visual forms, labeling and annotation.
· Approaching Standards: Some good choices of colors, arrangement of visual forms, labeling and annotation, but inconsistencies and ambiguities remain.
· Below Standards: Visualization is confusing due to inappropriate and inconsistent use of colors, placement of visual forms, or missing or confusing labeling and annotation.


[SOLVED] 158326 Software Construction Tutorials 3

158.326 Software Construction Tutorials 3

Create a class diagram based on the following case scenario. Next, implement the class diagram by creating a C# Windows Forms App project using Microsoft Visual Studio 2022. Alternatively, you can choose to create an ASP.NET Core Empty project and apply the MVC architectural pattern to your implementation - this is optional.

CASE

There are many different types of testing stations nationwide for conducting vehicle inspections. Each testing station has a registered name, an address and a contact telephone number. Some of the testing stations which conduct car and truck inspections are VTNZ, DriveSafe, NZ-WoF, and AA. You have to design an application for car inspections done by the VTNZ testing station. VTNZ offers many services for inspections. Different service types and prices apply for car and truck inspections. For car inspections, the service types and prices are given in Table 1.

Table 1
Service Type                 Service Price
WoF inspection               $50.00
Modified vehicle check-up    $200.00
Pre-purchase inspection      $150.00
Certificate of Fitness       $210.00

Part A: Name each class
1. __________________
2. __________________
3. __________________
4. __________________

How many abstract classes do you have? _____________________
How many interfaces do you have? ______________________
How many concrete classes? ______________________

Draw the class diagram, showing the relationships between them.

Hint: The class diagram is similar to the example done in class. You should use all OO concepts covered in class (i.e., inheritance, wrapper/interface and association) for a flexible design which can be extended.

NOTE: Use proper naming conventions, e.g. use camelCase, prefix private fields with _ and prefix protected fields with z.

Part B: Create a new project in Visual Studio and implement the design using the class diagram identified in Part A.
Further, note that VTNZ wants to keep track of the total number of inspections carried out and the total price for them. To illustrate this, screenshots on the following pages show you (1) the basic form design, (2) what will be displayed after the FormLoad event, (3) what will be displayed after one inspection is requested, and (4) what will be displayed after two inspections are requested.

Hints: The implementation will be similar to the example demonstrated in class. You will have to use shared/static fields/properties/methods to show the summary information - 'Total Number of Inspections Requested' and 'Total Price for all Inspections Requested'.
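The shared-totals idea in the hints can be sketched with class-level fields that every instance updates. The sketch below is in Python rather than the assignment's C# (class attributes play the role of C#'s static fields); the class names and the dictionary of Table 1 prices are illustrative, not the assignment's required design.

```python
class Inspection:
    """Base class; the class-level (shared) fields track totals across all instances."""
    total_count = 0
    total_price = 0.0

    def __init__(self, service_type: str, price: float):
        self.service_type = service_type
        self.price = price
        # Update the shared totals whenever an inspection is requested.
        Inspection.total_count += 1
        Inspection.total_price += price


class CarInspection(Inspection):
    """Hypothetical concrete subclass using the Table 1 car prices."""
    PRICES = {
        "WoF inspection": 50.00,
        "Modified vehicle check-up": 200.00,
        "Pre-purchase inspection": 150.00,
        "Certificate of Fitness": 210.00,
    }

    def __init__(self, service_type: str):
        super().__init__(service_type, self.PRICES[service_type])


CarInspection("WoF inspection")
CarInspection("Pre-purchase inspection")
print(Inspection.total_count)   # 2
print(Inspection.total_price)   # 200.0
```

The same shape carries over to C#: `static int TotalCount` and `static decimal TotalPrice` on the base class, incremented in its constructor, then displayed on the form after each request.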


[SOLVED] MTH1003 Mathematical Modelling CW3

MTH1003 Mathematical Modelling

CW3: Group project on population dynamics: Model of competition between species

Submission deadline: 12:00 noon on Thursday 20 March 2025 (week 10).

In this project you will use mathematics and numerical methods coded in Python to study the dynamics of a Lotka–Volterra model of competition between species. You will work in groups of five or six people, and each group will write a report, using LaTeX, to be submitted in week 10. The report should describe the methods used and clearly illustrate the results using plots created in Python. This assignment is AI-prohibited.

1 Conservation

Consider the populations of two species of squirrel - let's call them 'reds' and 'greys' - who inhabit the same ecosystem and compete for resources. Their respective populations are r(t) and g(t), in suitably scaled units. To begin with, suppose the squirrel populations obey the Lotka–Volterra model. Find and classify the equilibrium points of this system. Show that the quantity is conserved for this system.

2 Numerical simulation and testing

Use a forward Euler time integration scheme to obtain the solution of (1) for t ∈ [0, 6] with initial condition (r(0), g(0)) = (0.5, 0.116). To start with, use a time step ∆t = 0.1. By substituting your numerical solution for r(t), g(t) into the expression for C(r, g), compute the quantity C(t) = C(r(t), g(t)) for this numerical solution. The quantity C(t) is constant mathematically, and you should discuss how well C is conserved in your numerical solution. Repeat the calculation for smaller values of ∆t. What value of ∆t (let's call it ∆t0) is needed to ensure that the relative change in C, i.e. |(C(t) − C(0))/C(0)|, is less than 0.001 at the final time t = 6?

For the two cases ∆t = 0.1 and ∆t = ∆t0, plot the trajectories (r(t), g(t)) in the (r, g)-plane. Also indicate on the plot the positions of the equilibrium points. Briefly discuss the implications of your results for numerically modelling the system (1).
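The assignment's system (1) and its conserved quantity C are not reproduced in this extract, so the sketch below uses the classic predator–prey Lotka–Volterra system purely as a stand-in to demonstrate the forward Euler workflow and the conservation check; the function names, the stand-in model and its conserved quantity are ours, not the assignment's.

```python
import numpy as np

def euler(f, y0, t_end, dt):
    """Forward Euler: y_{n+1} = y_n + dt * f(y_n). Returns times and states."""
    n = int(round(t_end / dt))
    ys = np.empty((n + 1, len(y0)))
    ys[0] = y0
    for k in range(n):
        ys[k + 1] = ys[k] + dt * f(ys[k])
    return np.linspace(0.0, n * dt, n + 1), ys

def f(y):
    # Stand-in right-hand side (NOT the assignment's system (1)):
    # the classic predator-prey form r' = r(1 - g), g' = g(r - 1).
    r, g = y
    return np.array([r * (1.0 - g), g * (r - 1.0)])

def C(r, g):
    # Conserved quantity of the stand-in system (not the assignment's C).
    return r - np.log(r) + g - np.log(g)

t, ys = euler(f, np.array([0.5, 0.116]), 6.0, 0.1)
drift = abs((C(*ys[-1]) - C(*ys[0])) / C(*ys[0]))
print(f"relative change in C at t=6: {drift:.3e}")
```

Halving ∆t and re-running gives the convergence study the brief asks for: forward Euler is first order, so the drift in C should shrink roughly in proportion to ∆t.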
Use ∆t ≤ ∆t0 for any numerical integration in the questions below. 3 Stable and unstable manifolds One of the equilibrium points you found in section 1 should be a saddle. By using linear stability theory, determine the directions of the stable and unstable manifolds close to the saddle. Use your forward Euler time stepping code to compute and plot an estimate of the unstable manifold by choosing suitable initial conditions close to the saddle. Discuss briefly whether this estimate agrees with the linear stability analysis. By modifying your time stepping code to step backwards in time, or otherwise, compute and plot an estimate of the stable manifold of the saddle. Again, discuss briefly whether this estimate agrees with the linear stability analysis. 4 Nullclines and equilibria Now consider the extended Lotka–Volterra model with a = 2 and b = 3. Plot the nullclines of this system, and hence find the equilibrium points. Also classify the equilibrium points. 5 Ecosystem management Consider again the system (2). To begin with, a = 2 and b = 3, as in section 4. However, as a consequence of environmental changes, the parameter b is slowly decreasing. Discuss how the dynamics of the system changes as a result, and determine the critical value of the parameter b at which a stable equilibrium of co-existing reds and greys ceases to exist. Ecosystem managers are able to alter conditions so as to change the value of the parameter a. How should they change a in order to offset the effects of the decrease in b, and what value of a should they choose in order that the system should have a stable equilibrium with roughly equal populations of reds and greys? What would happen if the ecosystem managers changed a by too much? Is there a point, as b decreases, beyond which the ecosystem managers are unable to save the ecosystem? What happens in that case? Illustrate these various scenarios using suitable diagrams. 
6 Red and grey squirrels in the real world Do some background reading around the competition between red and grey squirrels in the real world. As a starting point, there are some excellent web sites aimed at the wider public [2,3]. For a deeper, more technical discussion you could look at [4,5,6], amongst others. Discuss briefly some of the factors that are thought to be important in the real world but have been omitted or simplified in the model (2) introduced above. Suggest ways in which the model (2) could be made more realistic for modelling the real world population dynamics; be specific about what mathematical terms one might include in the model and what they would represent, or you may even consider incorporating further differential equations. Reports Write up your project as a report using LATEX (no other medium is allowed). You will probably find the Overleaf online system a useful way to cooperate on the project report. A template is provided on ELE: you don’t have to use this, but you may find it helpful (for example it makes better use of space on the page than the LATEX default). You can drag the zipped template file on ELE into Overleaf and then you will have something to look at and potentially work with. You must submit your report as a pdf file. Reports should be a maximum of 10 sides of A4 – this includes everything except an appendix of Python scripts, see below – with a font size no smaller than 11pt. Methods and results should be clearly and concisely explained, with appropriate equations and figures (and perhaps tables). All figures and tables should have a caption, and should be referred to (by their number) in the main text; LATEX’s built-in cross-referencing capability is very useful for this. You should submit key Python scripts (i.e. if several scripts are minor variations of each other then there is no need to include more than one) in an appendix to your report; the appendix falls outside the 10-page limit. 
The template gives an example of a convenient way to include a Python script in your report. Scripts should include a reasonable number of clear comments to aid understanding of how they work. Your report should reference any sources you use in some standard style; for example see [1] below. All figures and plots presented should be your own unless they are clearly flagged as taken from a reference, for example by saying ". . ., taken from [1]" in the caption. Any direct quotes from sources should be in quotation marks and clearly referenced. You must not take Python or other code directly from other sources for the topics above, though you can of course study any such code and use it to inspire how you write your own. If you do find code from other sources helpful, then you should give a reference to those sources. You can, however, take any Python scripts or functions from the MTH1003 lecture notes and ELE page, and use and modify these without attribution.

Assessment and marking criteria

Marks will be awarded as follows:
• 90% Quality of report: including progress on the problem brief, mathematical accuracy, quality of discussion demonstrating good understanding, background information, quality of figures, overall layout, attractiveness and readability.
• 10% Evidence on the ELE Wiki showing the functioning of the group, including minuting key meetings, uploading and commenting on draft material, assigning tasks, setting deadlines, and supporting each other within the group.

An overall mark will be given to each group based on the above breakdown. We will also require each student to give information on the contribution of other students within the group, and this will be used to determine individual marks from the group mark, which may be higher or lower than the group mark. The group ELE Wiki may be used to check on the contribution of students to the group effort.
Any student who does not participate in their group, or shows very limited engagement, will gain a correspondingly low mark. Group working We will create new groups for CW3 that are distinct from those for CW2. The advice about working in groups for CW2 applies equally to CW3. So do divide up the tasks appropriately, and keep in touch with your group members and meet regularly to discuss progress. The advice from CW2 is reproduced here (with minor changes): This module and its assessment are not just about learning new mathematics or applying it in new contexts; they are also about working together in a professional manner, managing and coordinating work within a group of five or six people from diverse backgrounds and abilities, and ensuring a fair division of work between you. For the group work in this project you will need to jointly contribute to preparation of a report. This should involve you doing some or all of the following: • working together in a group on aspects of the problem, • doing mathematical calculations, • implementing numerical methods in Python, • explaining what you have done clearly and concisely, • helping others with support and constructive criticism (but do be tactful), • drafting and/or editing sections, and preparing diagrams for the report. We will provide each group with a Wiki on ELE which should be used to document the functioning of the group, and the contribution of individual students. You should use this medium to record notes of key meetings and who was present, tasks with deadlines as assigned within the group, the uploading of draft material (text, figures, codes, reports), and the commenting on and editing of draft material. The use of the Wiki is part of how we will assess your contribution to the project; see below. Dividing up tasks There are various ways of dividing up the work on this project among group members. 
There are interdependencies between the different parts, so those working on different parts will need to communicate and coordinate. In particular, the early parts are easier than the later parts, particularly part 5 on ecosystem management, so we suggest that for the mathematical side you organise to share out the earlier parts amongst the group members first, and then when these are complete you share out the later parts. Another way is to think about the different types of tasks involved and to try to play to the strengths of the different group members: Who might be a good leader/coordinator/organiser? Who might be good at mathematical derivations? Who might be a good coder? Who might be good with LaTeX? Who might be good at clear and concise communication and report layout? We recommend you work in pairs (or threes) on different aspects of the project. Either way, it will be important to bring all of the pieces of the project together at the end into a coherent piece of work. Although you'll need to divide up tasks, it is every group member's responsibility that the resulting report is as good as possible in terms of addressing the science and computing tasks and explaining the results to the reader in a manner that is attractive, clear, and 'flows'.

Working as a group

All members of your group (including you) depend upon each other to engage in the required task and make a full contribution. So do what you say you will do to the best of your ability, and if you are ill or can't attend a group meeting for any reason, then let members of your group know as soon as possible and arrange with them how to rectify the situation. It is a good idea to exchange email addresses and mobile numbers and/or use a social media group facility. Don't expect your group to countenance a clear lack of effort or unreliability on your part.
To encourage all group members to make a full contribution to the project, the mark awarded will depend in part on how your fellow group members rate your contribution. We will explain the mechanics of this rating system towards the end of the project.

Submission

Your report should be submitted as a pdf file via ELE. Only one report per group should be submitted, but please include on the front page of the report the student IDs of all the group members who contributed. The deadline is 12:00 noon on Thursday 20th March 2025 (week 10).

Bibliography

[1] University of Exeter. LibGuides: An introduction to referencing. Retrieved January 20, 2022 from https://libguides.exeter.ac.uk/referencing/.
[2] Woodland Trust: Red Squirrel Facts. Retrieved November 25, 2022 from https://www.woodlandtrust.org.uk/blog/2018/11/red-squirrel-facts/
[3] Wildlife Trusts: Red squirrels. Retrieved November 25, 2022 from https://www.wildlifetrusts.org/saving-species/red-squirrels
[4] Gurnell, J., Wauters, L.A., Lurz, P.W.W., Tosi, G., 2004. Alien species and interspecific competition: Effects of introduced eastern grey squirrels on red squirrel population dynamics. J. Animal Ecology, 73, 26-35.
[5] Roberts, M.G., Heesterbeek, J.A.P., 2021. Infection dynamics in ecosystems: on the interaction between red and grey squirrels, pox virus, pine martens and trees. J. Roy. Soc. Interface, 18, 20210551. https://doi.org/10.1098/rsif.2021.0551
[6] Twining, J.P., Lawton, C., White, A., Sheehy, E., Hobson, K., Montgomery, W.I., Lambin, X., 2022. Restoring vertebrate predator populations can provide landscape-scale biological control of established invasive vertebrates: Insights from pine marten recovery in Europe. Global Change Biology, 28, 5368-5384. DOI:10.1111/gcb.16236


[SOLVED] Line Detection Using Hough Transform (Matlab)

Line Detection Using Hough Transform

1. Introduction

Line detection is crucial for numerous real-world applications, including driving systems, navigation robots, and image processing operations [1,2]. The Hough Transform is a highly efficient computer vision algorithm that is especially well suited here: it can detect straight lines in images. The Hough Transform works by converting points of the original image into a new parameter space. In this space, straight lines correspond to simple, recognizable patterns, which makes it easier to identify line parameters even in noisy images.

We discuss two specific line detection tasks based on the Hough Transform in this report [4,5]. The first task utilizes MATLAB's built-in functions [3] to carry out edge detection and line detection on an image. After extracting the lines, we study their nature, that is, we verify whether the lines are parallel or not. The second task performs the same line detection but carries out a thorough analysis using custom methods and compares the outcomes. Our objective is to demonstrate how the Hough Transform can be used efficiently, both through standard MATLAB operations and custom procedures, and thus to understand its real-world performance and computational accuracy.

2. Task 1: Line Detection Using MATLAB Built-in Functions

2.1 Task Description

The line detection in this task is performed using MATLAB built-in functions [3], including edge, hough, and houghpeaks. These functions provide an easy way to detect edges in images and, using the Hough Transform [4], to extract the important lines. This section provides a general idea of how MATLAB's standard tools are used to efficiently detect and extract lines.

2.2 Image Preprocessing and Edge Detection

The first step in line detection is generally edge detection on the original image.
We apply MATLAB's edge function for this; it identifies areas of abrupt intensity changes in an image, which typically correspond to edges. The edge detection must be accurate since it directly affects the precision of the subsequent line detection. Thus, the preprocessing step involves selecting appropriate parameters for the edge detector to get clear, visible edges.

2.3 Hough Transform

The second step, after edge extraction, is the application of the Hough Transform using MATLAB's built-in function hough [5]. It transforms points on detected edges in the original image into a collection of points in a parameter space. Specifically, it parameterises lines by two values: a distance r and an angle θ. Each edge point in the image space corresponds to a set of points in this parameter space, and the intersections in the parameter space indicate the presence of straight lines in the original image. The locations of the salient lines can be accurately obtained by inspecting these intersections using the houghpeaks function.

2.4 Line Extraction and Parallelism Analysis

The next step involves using the houghpeaks function to identify the peaks in the Hough accumulator array, which correspond to the most prominent lines in the image. We extract the parameters r and θ for these lines. To analyze whether the detected lines are parallel, we check if the values of θ are identical or very close. Parallel lines should have nearly the same θ values, differing only by small numerical inaccuracies. If the difference between the θ values of two lines is smaller than a defined threshold, we consider the lines to be parallel.

2.5 Results and Analysis

With the capabilities offered by MATLAB [3], we were able to extract lines from the image and verify whether they are parallel or not. The experimental results show that in Figure 1a, all the lines detected in the image are at a 0-degree angle, which indicates that these detected lines are parallel to each other.
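The θ-comparison described in §2.4 reduces to a few lines of code in any language. The helper below is our own Python illustration (the function name and tolerance are ours, not the report's); it folds angles so that θ and θ±180° count as the same orientation before applying the threshold.

```python
def group_parallel(thetas_deg, tol=1.0):
    """Return True if all line angles agree to within `tol` degrees,
    treating theta and theta +/- 180 degrees as the same orientation."""
    # Fold every angle into [-90, 90) so opposite normals compare equal.
    norm = [((t + 90.0) % 180.0) - 90.0 for t in thetas_deg]
    return max(norm) - min(norm) <= tol

print(group_parallel([0.0, 0.0, 0.2]))          # True  -> lines are parallel
print(group_parallel([5.0, 0.0, -5.0, -10.0]))  # False -> not parallel
```

The two calls mirror the report's Figure 1a (all angles 0°, parallel) and Figure 1b (angles 5°, 0°, -5°, -10°, not parallel).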
That is, a 0-degree line is horizontal, and hence in this case, all the detected lines are parallel to each other. In Figure 1b, by contrast, the detected lines are at angles of 5°, 0°, -5°, and -10°. The lines are at different angles, and therefore they are not parallel to each other. As can be seen from the above results, even slight differences in the angles determine whether the lines are parallel or not.

Figure 1a, Figure 1b

3. Task 2: Line Detection Using Custom Functions

3.1 Task Description

The second task is to do edge detection, the Hough Transform, and line extraction using our own customized functions. We aim to show a deeper grasp of the Hough Transform process by implementing our own version of the algorithms. We develop custom versions of the edge, hough, and houghpeaks functions, namely myEdge, myHough, and myHoughPeaks.

3.2 Custom Hough Transform Implementation

Our program begins with our own edge detection function, myEdge. This function works like MATLAB's built-in edge detector, but in a simpler way: it finds edges by computing gradient data and applying a threshold.

Secondly, to apply the Hough Transform, we call the main function, myHough. The function maps each edge pixel detected in the image to locations in a two-dimensional parameter space. The parameter space represents possible lines in terms of their parameters, typically r and θ. Each edge pixel votes for cells in an accumulator matrix based on its corresponding parameters. As voting proceeds, the accumulator fills up, with higher values representing more likely lines.

Once the accumulator is filled, the dominant lines are detected with our peak-detection function, myHoughPeaks.
The function finds the most-voted cells in the accumulator matrix, corresponding to the strongest and most probable lines in the input image. Peaks are identified as local maxima. By mapping the parameter space back to the original image coordinates, we obtain the actual values of the parameters r and θ for all the lines found.

3.3 Code Explanation

We provide a brief explanation of our custom functions below:
Edge Detection: The custom myEdge function uses gradient calculations to detect edges by comparing neighboring pixel intensities. The output is a binary edge map.
Hough Transform: The myHough function iterates through the edge points, calculates the corresponding r and θ, and stores these in an accumulator matrix.
Peak Detection: The myHoughPeaks function scans the accumulator matrix for local maxima and extracts the parameters of the detected lines.

3.4 Computational Complexity Analysis

We study the complexity of the custom Hough Transform algorithm in terms of the following main steps:
Edge Detection: The edge detection algorithm visits every pixel in the image. This gives a time complexity of O(N×M), where N and M are the dimensions of the image.
Hough Transform: Each edge pixel votes once per discretized angle, so the voting contributes a time complexity of O(N×M×T), where T is the number of discretization steps along the angle θ; with R discretization steps along the distance r, the accumulator itself requires O(T×R) memory.
Peak Detection: The peak detection process involves scanning the Hough accumulator matrix, which has a complexity of O(T×R).
Combining the three stages, the overall time complexity of the custom implementation is O(N×M×T + T×R). This makes the implementation relatively computationally expensive, especially for fine discretizations of θ and r and for large images.
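The voting scheme described in §3.2 can be illustrated with a compact NumPy sketch. This is our own code, not the report's MATLAB implementation; the grid resolution and the synthetic test image are illustrative.

```python
import numpy as np

def hough_accumulate(edge_mask, n_theta=180):
    """Vote each edge pixel into an (r, theta) accumulator.
    A line is x*cos(t) + y*sin(t) = r; accumulator peaks mark likely lines."""
    h, w = edge_mask.shape
    thetas = np.deg2rad(np.arange(-90.0, 90.0, 180.0 / n_theta))
    r_max = int(np.ceil(np.hypot(h, w)))          # largest possible |r|
    acc = np.zeros((2 * r_max + 1, len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + r_max, np.arange(len(thetas))] += 1   # one vote per angle
    return acc, thetas, r_max

# A horizontal edge row y = 5 concentrates all 20 votes at theta = -90 deg,
# r = -5 (the same line as theta = +90 deg, r = +5).
img = np.zeros((20, 20), dtype=bool)
img[5, :] = True
acc, thetas, r_max = hough_accumulate(img)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(r_idx - r_max, round(np.rad2deg(thetas[t_idx])))  # -5 -90
```

The doubled-radius accumulator (rows for r ∈ [-r_max, r_max]) avoids negative indices; a peak-detection pass over `acc`, as in myHoughPeaks, then recovers (r, θ) for each strong line.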
3.5 Performance Comparison

Comparing our implementation with MATLAB's built-in functions, we found that our solution runs more slowly, since it performs extra computation steps that are less optimized than MATLAB's built-in routines. The main advantage of developing a custom procedure, however, is greater control over the parameterization and behavior of the Hough Transform. MATLAB's built-in tools, on the other hand, are carefully optimized for speed, efficiency, and accuracy, and are therefore more appropriate for large data sets or operations that must be performed quickly.

In our tests, nearby and parallel lines were detected well by the self-developed routines, at the expense of increased processing overhead. For instance, in our first test image (Figure 1a), eight lines were correctly detected as perfectly horizontal (0°), and in our second test image (Figure 1b), the self-developed functions successfully detected lines at 5°, 0°, -5°, and -10°. These results confirm that our implementation detects lines accurately, closely matching the true orientations present in the images.

4. Results and Discussion

Both our implementation and the built-in version of the Hough Transform detected the prominent lines in the test images. The results of the two methods are comparable, showing that our tailored algorithms function correctly relative to the built-in functions. One advantage of the self-written functions is that we could directly control and adjust the algorithm's parameters, allowing us to test and better understand how parameter choices influence the detection outcome. To decide whether the computed lines were parallel, we compared the angles θ obtained from both versions.
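The parallelism check described here amounts to comparing the detected θ values. A hypothetical snippet of that comparison (ours, not the report's code), with a small tolerance for discretization error:

```python
def are_parallel(line_thetas_deg, tol=0.5):
    """True when all detected lines share one orientation, within `tol`
    degrees. (A full version would also handle the wrap-around between
    +90 and -90 degrees, which denote the same orientation.)"""
    return max(line_thetas_deg) - min(line_thetas_deg) <= tol

# Figure 1a: eight lines, all detected at 0 degrees -> parallel.
assert are_parallel([0.0] * 8)
# Figure 1b: angles of 5, 0, -5 and -10 degrees -> not parallel.
assert not are_parallel([5.0, 0.0, -5.0, -10.0])
```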
The results show minimal variation in the corresponding line angles, which means that both implementations were stable and consistent in determining line orientations, making it easy to verify the parallelism of the detected lines. Another interesting observation is the practical trade-off between convenience and control: while MATLAB's built-in functions give immediate and accurate results, a user-defined implementation offers flexibility, particularly in academic work and in complex applications requiring special algorithmic modifications.

5. Conclusion and Future Work

Both the built-in and our custom versions of the Hough Transform performed adequately for line detection in the sample images. Implementing our own functions, however, gave us deeper insight into the internal processes and mechanisms of the algorithm and enabled finer tuning. Future work could make the custom implementation more efficient by optimizing it for faster computation and lower processing time, and could make the algorithm more robust to unfavorable conditions such as noisy images or images with complex backgrounds.


[SOLVED] IFB201TC Inventory and Warehouse Management

IFB201TC Inventory and Warehouse Management

Tasks:

Section A: Assume you run a shop, and one of the items you sell is hair dryers. Use what you learned in the inventory lectures to answer the following questions:

1. Choosing the right order quantity is a difficult decision. Based on what you learned in class, you think the EOQ model might be a good starting point for this inquiry, and you choose to assume that demand is constant throughout the year. From the sales records provided by the company, you find that last year the company sold 20,000 hair dryers. The company purchases the hair dryers from its supplier at 20 RMB per dryer. You further estimate that the cost of placing an order is 400 RMB each time, and that on average the monthly cost of holding inventory per hair dryer is around 4% of the purchase cost per hair dryer. Based on the information collected, if back orders are not allowed, how many hair dryers would you suggest ordering each time, and why? Please round your answer to 3 decimal places. 50-word limit. (15 marks)

2. In the first scenario, you hired two experts (Alex and Steve) for further analysis. Alex estimates that the annual back-order cost per hair dryer is 50% of the purchase cost per hair dryer, whereas Steve believes it is 150% of the purchase cost per hair dryer. If back orders are allowed:
· How many hair dryers would you suggest ordering each time in Alex's case?
· How many hair dryers would you suggest ordering each time in Steve's case?
· What difference can you observe between the two cases?
Please round your answers to 3 decimal places. 100-word limit. (25 marks)

3. As in A1, assume again that back orders are not allowed. Also assume a monopoly on the supplier side: there is only one supplier of this very new type of hair dryer, with no simple substitute. The supplier has many customers in addition to you.
Consequently, to be prioritized over the supplier's other customers, you must offer extra money to the supplier according to its price table:
· The current price of 20 RMB per dryer is valid only if you order fewer than 1,000 dryers.
· The supplier increases the unit price by 5% if you order 1,000 or more dryers per order.
Under these circumstances, what should your ordering arrangement be? Please round your answer to 3 decimal places. 50-word limit. (25 marks)

Section B: Assume you also run a university restaurant independently. Every morning you make good-quality steamed buns, and you estimate that the cost of one steamed bun is around 1 RMB. The sales price of each steamed bun is 4 RMB, but any buns left unsold in the evening must be disposed of, as they are no longer fresh. Currently, you do not know how many steamed buns you should make each morning.

1. Suppose the daily demand for steamed buns is normally distributed with a mean of 100 and a standard deviation of 20. How many steamed buns would you prepare each morning, and why? Please round your answer to 3 decimal places. 50-word limit. (10 marks)

2. Starting from the newsboy formula and your answer to B1, discuss possible solutions to reduce food waste and its negative impacts. 150-word limit. (25 marks)
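For orientation only (this is an illustrative sketch, not the marked solution), questions A.1 and B.1 rest on the classic EOQ and newsvendor formulas. Plugging in the numbers from the prompt, and assuming the annual holding cost is twelve times the stated monthly cost:

```python
import math
from statistics import NormalDist

# Section A.1 -- basic EOQ without back orders: Q* = sqrt(2*D*S/H).
D = 20_000            # annual demand, hair dryers
S = 400               # ordering cost per order, RMB
H = 12 * 0.04 * 20    # annual holding cost per dryer: 12 months x 4% x 20 RMB
eoq = math.sqrt(2 * D * S / H)
print(round(eoq, 3))  # 1290.994 dryers per order

# Section B.1 -- newsvendor: stock up to the critical-ratio quantile.
cost, price = 1, 4                          # RMB per steamed bun
critical_ratio = (price - cost) / price     # 0.75
q = NormalDist(mu=100, sigma=20).inv_cdf(critical_ratio)
print(round(q, 3))    # about 113.49 buns per morning
```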
