Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] 31250 Introduction to Data Analytics Assessment Task 2 (Python)

Assessment Task 2: Data exploration and preparation

Scenario: You have just started working as a data miner/analyst in the Analytics Unit of a company. The Head of the Analytics Unit has brought you a dataset [a welcome present :-)]. The dataset includes two files: a description of the attributes and a table with the actual values of these attributes. The Head of the Analytics Unit has mentioned to you that this is some sort of loan data that a potential client has provided for analysis, and would like a report with some insights about the data that he/she could deliver to the client. Your tasks include understanding the specifics of the dataset, and extracting information about each of the attributes, possible associations between them, and other specifics of the dataset. The tasks in the assignment are specified below.

Tasks

A. Initial data exploration

A1. Identify the attribute type of each attribute in your dataset (age, loan default, asset_cost, ., marital status): nominal, ordinal, interval or ratio. If the type is not clear, you may need to justify why you chose it.

A2. Identify the values of the summarising properties for the attributes, including frequency, distributions, min, max, medians, means, variances, standard deviations, percentiles, etc. - the statistics that have been covered in the lectures and materials given. Note that not all of these summary statistics will make sense for all the attribute types, so use your judgement! Where necessary, use proper visualisations for the corresponding statistics. NOTE: if there are missing values, demonstrate their impact on the summary statistics of the attributes.

A3. Using KNIME or other tools, explore your dataset and identify any outliers, clusters of similar instances, "interesting" attributes and specific values of those attributes. Note that you may need to temporarily recode attributes to numeric or from numeric to nominal.
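For Task A2's missing-values requirement, a minimal standard-library sketch of how missing values shift summary statistics. The sample "age" values and the use of None as the missing-value marker are invented for illustration only:

```python
# Hypothetical sketch for Task A2: summary statistics for a numeric
# attribute ("age" is assumed here) with and without missing values,
# using only the Python standard library.
import statistics

ages = [23, 31, None, 45, 27, None, 52, 39]  # None marks a missing value

present = [a for a in ages if a is not None]

summary = {
    "count_total": len(ages),
    "count_missing": ages.count(None),
    "min": min(present),
    "max": max(present),
    "mean": statistics.mean(present),
    "median": statistics.median(present),
    "stdev": statistics.stdev(present),
}

# Naively recoding missing values as 0 (a common mistake) distorts
# the statistics, which is the impact the task asks you to show:
naive_mean = statistics.mean(0 if a is None else a for a in ages)

print(summary)
print(naive_mean)
```

Comparing `summary["mean"]` with `naive_mean` makes the impact of the missing values concrete; the same contrast can be repeated for the median, variance, and percentiles.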
The report should include the corresponding snapshots from the tools and an explanation of what has been identified there. NOTE: if there are missing values, demonstrate their impact on the data exploration. Present your findings in the assignment report.

B. Data preprocessing

Perform each of the following data preparation tasks (each task applies to the original data) using your choice of tool:

B1. Use the following binning techniques to smooth the values of the "age" attribute: equi-width binning and equi-depth binning. In the assignment report, illustrate your steps for each of these techniques. In your Excel workbook file, place the results in separate columns in the corresponding spreadsheet. Use your judgement in choosing the appropriate number of bins, and justify this in the report.

B2. Use the following techniques to normalise the attribute "asset_cost": min-max normalisation to transform the values onto the range [0.0, 1.0], and z-score normalisation to transform the values. The assignment report should provide an explanation of each of the applied techniques. In your Excel workbook file, place the results in separate columns in the corresponding spreadsheet.

B3. Discretise the "PERFORM_CNS.SCORE" attribute into the following categories: Very Low Risk, Low Risk, Medium Risk, High Risk and Very High Risk. Provide the frequency of each category in your dataset. The assignment report should explain each of the applied techniques. In your Excel workbook file, place the results in a separate column in the corresponding spreadsheet.

B4. Binarise the "marital status" variable (with values "0" or "1"). The assignment report should explain the applied binarisation technique. In your Excel workbook file, place the results in separate columns in the corresponding spreadsheet.

C. Summary

At the end of the report, include a summary section in which you summarise your findings.
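The Task B preprocessing steps can be sketched in plain Python. The sample values, the choice of 3 bins, and the "Married" → 1 coding below are illustrative assumptions, not prescribed choices:

```python
# Hedged sketch of the Task B preprocessing steps on made-up values.
ages = [22, 25, 31, 34, 41, 47, 53, 58, 64]
costs = [52000, 61000, 58000, 75000, 69000]

# B1a: equi-width binning (3 bins, each covering an equal value range)
lo, hi, k = min(ages), max(ages), 3
width = (hi - lo) / k
equi_width = [min(int((a - lo) / width), k - 1) for a in ages]

# B1b: equi-depth binning (3 bins with an equal number of values)
depth = len(ages) // k
equi_depth = [min(i // depth, k - 1) for i, _ in enumerate(sorted(ages))]

# B2a: min-max normalisation of asset_cost onto [0.0, 1.0]
cmin, cmax = min(costs), max(costs)
minmax = [(c - cmin) / (cmax - cmin) for c in costs]

# B2b: z-score normalisation (population standard deviation)
mean = sum(costs) / len(costs)
std = (sum((c - mean) ** 2 for c in costs) / len(costs)) ** 0.5
zscores = [(c - mean) / std for c in costs]

# B4: binarising a nominal marital-status attribute to 0/1
marital = ["Married", "Single", "Married"]
binary = [1 if m == "Married" else 0 for m in marital]
```

The same ideas carry over to B3: discretising PERFORM_CNS.SCORE is equi-width-style binning with named, domain-chosen cut-offs instead of equal-width ones.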
The summary is not a narrative of what you have done, but a condensed, informative section on what you have found about the data that you should report to the Head of the Analytics Unit. The summary may include the most important findings: specific characteristics (or values) of some attributes, important information about the distributions, some clusters identified visually that you propose to examine, associations found that should be investigated more rigorously, etc.

Deliverables

The deliverables are:
· A report, whose structure should follow the tasks of the assignment.
· An Excel workbook file with individual spreadsheets for each task (spreadsheets should be labelled according to the task names, for example, "B"). Each of the results of parts (B1) through (B4) in task B should be presented in a separate spreadsheet (and, respectively, a table in the assignment report).
· A KNIME workflow (ida_xxx.knwf) or Python notebook (ida_xxx.ipynb).

In the report, include a section (starting with a section title) for each of the tasks in the assignment.

$25.00

[SOLVED] Conservation and Preservation of Art

Course Description: How are art collections and museums formed? Who decides what a museum exhibits? Is a museum like a bank vault filled with precious objects, or is it more like a secular cathedral? This course will address these questions by surveying the history and philosophy of art collections and museums. Topics include: public, private and corporate art collections; the conservation and preservation of art; museum architecture; installation design; traveling exhibitions; museum education programs; exhibition catalogs; museum trustees; laws that impact museums; commercial galleries and non-profit artists' spaces.

$25.00

[SOLVED] FINM025 proposal requirements

FINM025 proposal requirements

Introduction and background to the study and Rationale (approx. 1100 words)
Main subheadings: 1. Introduction; 2. Background, Context and Rationale.
This section should provide the background, which outlines what your proposed research is about and what it is you are seeking to discover/achieve. It should be a brief introduction outlining the general area of study and identifying the subject area within which your study falls. You should also refer to the current state of knowledge (i.e. what research has been done to date) and any recent debates on the subject. Theories and empirical studies should inform the entire introduction, background and rationale. This section should demonstrate and explain:
· How much is already known about the problem, and why is it important?
· Define key concepts where appropriate.
· What is missing from current knowledge, and why?
· What new insights will your research contribute?
· Why is this research worth doing?
· Why your research is important - it is not enough to say that this has not been studied previously; you need to explain why it should be studied, i.e. why it is interesting/important.
· How your research 'fills a gap' in existing research (i.e. show that it hasn't been done before).
· Who has an interest in the topic (e.g. scientists, practitioners, policymakers, particular members of society)?

Aims and Objectives (approx. 300 words)
Your aims should be broad statements of desired outcomes or the general intentions of the research. They need to emphasise what is to be accomplished, not how it will be accomplished. They should also address the long-term project outcomes. The research aim should be aligned with the research title. One research aim and three or four objectives are optimal. Objectives are then the steps you plan to take to answer your research questions, or a specific list of tasks needed to accomplish the goals of the project. They:
· Emphasise how aims are to be accomplished
· Must be highly focused and feasible
· Address the more immediate project outcomes
· Make accurate use of concepts and be sensible and precisely described
· Are usually numbered so that each objective reads as an 'individual' statement to convey your intention
Example:
Aim: To investigate the regulatory determinants of Fintech in emerging economies.
Objectives:
· To review the literature to find potential regulatory factors unique to the emerging economy
· To ascertain the best model to study the relationship between the factors identified and Fintech
· To analyse the findings and contribute to the research area

Research Methods (approx. 500 words)
This section covers: 1. The research design/methodology - the overall approach (theoretical framework) and practical steps (methods) you will take to answer your research questions; 2. Data collection and analysis; 3. Ethical considerations. Students should ensure that they justify the method using the relevant literature in their area of interest. You might want to consider some of the following prompts:
· What are the main research questions?
· How can I answer them?
· What is my research approach (quantitative or qualitative)? Why?
· What is the research sample? Why?
· Overall approach and rationale
· Sampling, data-gathering methods, data analysis
For quantitative studies: data can be extracted from secondary sources such as DataStream, Bloomberg, Orbis, etc., but the source needs to be specified. What are the empirically testable hypotheses associated with the research question? What is my study model (regression equation) specification? What is the definition of each study variable (dependent, independent, control)? And how can I collect them? Sources? And proxies?
For qualitative studies:
· Types: primary data only for qualitative research, e.g. individual interviews, participant observation, focus groups, etc.
· Analysis methods vary depending on the qualitative approach, e.g. Content Analysis, Thematic Content Analysis, etc.
· Add details about how data will be gathered and processed (procedures should be made public).
· Are there any ethical considerations?

Research Plan (approx. 100 words)
You should include an outline of the various stages and the corresponding timeline for developing and implementing the research, including writing up your thesis. You might need to utilise tools such as a Gantt chart.
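The "study model (regression equation) specification" prompt above can be made concrete with a small, entirely invented example: a one-regressor ordinary-least-squares fit computed in closed form. The variable names and data are hypothetical, chosen only to mirror the Fintech aim used earlier:

```python
# Illustrative study-model specification:
#   fintech_adoption = b0 + b1 * regulation_index + error
# Data and variable names are invented for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]    # independent: regulation index
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # dependent: fintech adoption

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Closed-form OLS estimates for a single regressor
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar

print(f"adoption = {b0:.2f} + {b1:.2f} * regulation_index")
```

In a real proposal the specification would name every variable, its proxy, and its data source (DataStream, Bloomberg, Orbis, etc.), as the prompts require.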

$25.00

[SOLVED] ESS105 Story Map

ESS105 Story Map

This assignment is intended to increase your skills in scientific literacy: the ability to use evidence and data to evaluate scientific information. Your job is to take a complicated topic and make it both simple and interesting for your audience. Consider your audience to be your peers; these are all non-science students. This is an individual assignment, not group work.

Here is where you get to reap the benefits of having worked hard all semester long. The Story Map will contain your improved Individual Map and the text from your improved Rough Draft, in addition to new images and videos that help to tell your story. Note: if you did your Rough Draft in Google Docs, please download it to your machine before you attempt to copy and paste into the Story Map, or your work won't save properly.

You will need to add new text to introduce those images, maps, audio, and videos within the body of your text. So, for instance, when you go to discuss something that is related to your first map or image, simply put (Fig 1) in brackets at the end of that sentence - that is how you tell the reader it is time to go and look at the map/figure. Figure/video/map/audio captions will also be required; each caption needs to contain the main point of the figure/video/map/audio as well as the in-text citation for its source. Here is a sample image that I have put with a proper figure caption for you to see what I mean (Fig 1):

Fig 1: This is the newest available map of the tectonic plates; note that it includes the traditional rigid plates as well as microplates and deformation zones (Hasterok et al., 2022).

I would also have to add a matching end reference in my end refs list for that figure, as follows:

Hasterok, D., Halpin, J. A., Collins, A. S., Hand, M., Kreemer, C., Gard, M. G., & Glorie, S. (2022). New maps of global geological provinces and tectonic plates. Earth-Science Reviews, 231, 104069. https://doi.org/10.1016/j.earscirev.2022.104069

Use the same citation style for images/video/audio as you would for any other in-text citation and end reference: APA 7th edition format. Your Story Map needs to contain your references, both in-text and end references, in the APA format. It will once again contain the Ulrichs Web screenshots. Wondering how to cite images or videos in the APA format? https://owl.purdue.edu/owl/research_and_citation/apa_style/apa_formatting_and_style_guide/reference_list_audiovisual_media.html

Remember how important it is to use a SYNTHESIS style versus a SUMMARY style when you are writing. You are intending to convince your reader of something, to tell them about something - not just present a series of semi-related facts. The top skills that you will be assessed on include: choosing and using peer-reviewed sources; citing properly, both in-text and in the end references, in the APA 7th Edition format; and your inclusion of geoscientific content - properly paraphrased - from those scientific sources. As with your rough draft, you need to include the Ulrichs Web screenshots directly following each peer-reviewed journal reference entry. You must include your individual map(s) in your Story Map. As always, there is a marking rubric on Quercus.

Helpful tips from Lisa: remember NOT to copy and paste directly from Google Docs - download it to your own machine first; remember to publish your story map; remember to share your story map to the class group; don't use the built-in references section, because you can't add your Ulrichs screenshots in it. Just use the regular "add text" and "add image" options to build your own ref section.

$25.00

[SOLVED] Simulink Simulation: Converter Circuits (Matlab)

Matlab Simulink Simulation: Converter Circuits

Objective: to use Matlab to support the learning of Power Electronics circuits.

Installing Matlab and the Simscape Electrical app: download and install Matlab from this link if you have not done so. When you install Matlab, make sure you also install Simulink and the Simscape Electrical app in the package. If you already have Matlab and Simulink installed, but not the Simscape Electrical app, go to the APPS tab and click on "Get More Apps", as shown below. In the pop-up window, type "Simscape Electrical" into the search box located in the upper right-hand corner, and clear the search filters by clicking the "x" on "Clear Filters", as shown below, before starting your search. Once you see the "Simscape Electrical" app in the search results, you can install the app.

Simulation procedure for newer Matlab versions (e.g. R2023b or R2024b):
1. Start Matlab.
2. Start Simulink by typing "simulink" in the command window, as shown in Fig. 7. Simulink will launch, as shown in Fig. 8. (Fig. 7: Starting Simulink. Fig. 8: Simulink window.)
3. Start a blank model by clicking the "Blank Model" button. A new blank Simulink window will launch.
4. In the new window, click the "Library Browser" button on the top menu, as shown in Fig. 9. (Fig. 9: New Simulink window with library browser.) Most of the required blocks are under "Simscape/Electrical/Specialized Power Systems/Power Electronics". You can also launch a standalone library browser, as shown in Fig. 10. (Fig. 10: Power Electronics library.)
5. In addition, you will also need other common Simulink blocks, such as the "Scope" block under "Sinks", for your project. They are under the Simulink library at the top of the library list. (Fig. 11: The blank Simulink model.)
6. Drag and drop the "powergui" block from "Simscape/Electrical/Specialized Power Systems" in the Simulink Library Browser into your model (Fig. 5).
7. Construct a simple single-phase half-wave full-control converter in the new model.
8. You may need the following blocks for constructing the converter.
9. Set the blocks with the proper values. AC Voltage Source: Peak amplitude (V), Phase (deg), Frequency (Hz). Pulse Generator: Period (secs), Pulse Width (% of period), Phase delay (secs). RLC Branch: Branch type, Resistance (ohm), Inductance (H). Scope: Number of axes.
10. Your model should look similar to the one shown in Fig. 12. (Fig. 12: Single-Phase Half-Wave Full-Control Converter.)
11. Change the simulation stop time to 1 (sec).
12. Save your model and click the green "Run" button at the top of the menu to start the simulation.
13. To see the results, you may need to double-click the "Scope" icon. Your results should look similar to Fig. 13. (Fig. 13: Results for the model shown in Fig. 12.)

Simulation procedure for older Matlab versions (e.g. R2020a) - if you are using a newer version, such as R2023b, please use the previous section instead:
1. Start Matlab.
2. In order to use the Simulink models contained in the Simscape Electrical blocks, you need to load the powerlib library. In the Command Window, type "powerlib" to load the power electronics library, as shown in Fig. 2. The powerlib library is shown in Fig. 3, and the blocks in the Power Electronics library are shown in Fig. 4. (Fig. 2: The powerlib command in Matlab. Fig. 3: The powerlib library. Fig. 4: The general Simulink Library.)
3. In addition, you will also need other common Simulink blocks, such as the "Scope" block under "Sinks", for your project. They are under the Simulink library at the top of the library list. (Fig. 5: The blank Simulink model.)
4. At the top of the Matlab window, click "Simulink Model" in the "New" button.
5. Drag and drop the "powergui" block from powerlib into the model (Fig. 5).
6. Construct a simple single-phase half-wave full-control converter in the new model.
7. You may need the following blocks for constructing the converter.
8. Set the blocks with the proper values. AC Voltage Source: Peak amplitude (V), Phase (deg), Frequency (Hz). Pulse Generator: Period (secs), Pulse Width (% of period), Phase delay (secs). RLC Branch: Branch type, Resistance (ohm), Inductance (H). Scope: Number of axes.
9. Your model should look similar to the one shown in Fig. 6. (Fig. 6: Single-Phase Half-Wave Full-Control Converter.)
10. Change the simulation stop time to 1 (sec).
11. Save your model and click the green "Run" button at the top of the menu to start the simulation.
12. To see the results, you may need to double-click the "Scope" icon. Your results should look similar to Fig. 7. (Fig. 7: Results for the model shown in Fig. 6.)

Discussion:
1. Based on the results from Matlab, find the mean voltage of the converter.
2. Compare the mean voltage evaluated in Matlab with that from the theoretical calculation:
   a. Write down the equation for calculating the mean voltage.
   b. Find the mean voltage based on the theoretical equation.
3. Discuss any differences identified between the two results.

Report: there is no report required for this lab.
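For the discussion questions, the theoretical mean voltage can be cross-checked outside Simulink. A sketch, assuming a 100 V peak source and a 30° firing angle (both invented values), using the textbook formula Vmean = Vm(1 + cos α)/(2π) for a single-phase half-wave fully-controlled converter with a resistive load:

```python
# Cross-check of the converter's mean output voltage:
# textbook formula versus a numerical average over one period.
# Vm and alpha are assumed example values, not the lab's settings.
import math

Vm = 100.0                 # assumed peak source voltage (V)
alpha = math.radians(30)   # assumed firing angle

# Theoretical mean: Vmean = Vm * (1 + cos(alpha)) / (2*pi)
v_theory = Vm * (1 + math.cos(alpha)) / (2 * math.pi)

# Numerical mean over one full period: the output is zero except
# while the thyristor conducts, from alpha to pi.
N = 200_000
total = 0.0
for i in range(N):
    wt = 2 * math.pi * i / N
    if alpha <= wt <= math.pi:   # conduction interval
        total += Vm * math.sin(wt)
v_numeric = total / N

print(v_theory, v_numeric)
```

Any gap between this pair of numbers and the Simulink scope reading comes from simulation step size and device models, which is exactly the difference the discussion asks you to explain.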

$25.00

[SOLVED] Exploring Map Projections and Geographic Coordinates in ArcGIS Pro

Session #5: Exploring Map Projections and Geographic Coordinates in ArcGIS Pro

Investigating and defining projections in ArcGIS Pro. Download the Projections.zip file from the HW data folder on Canvas. The project deals with Coconino County, Arizona, which includes Flagstaff, Arizona and the Grand Canyon. You are faced with three GIS data layers in three different spatial references (projections). Your job will be to get them all to register in the same map frame. The map on the left shows the general area of Coconino County, while the map on the right shows what is expected to appear when you open the Projections ArcGIS Pro project. At present, only the yellow county layer itself is in the Coconino County map.

Open the Projections project from the .aprx file in the project folder, "Projections", that you extracted from Projections.zip. Open the properties for the county layer by right-clicking on the name of the layer in the Contents pane, then selecting Properties. Click the Source tab on the left and look over the data source, extent, and spatial reference properties. This layer has a spatial reference defined, and it will be the one we use as a base for getting the other layers to register in the correct location in our map.

Question 1: What is the name of the Projected Coordinate System (spatial reference) of the county map layer?

Close the properties window. Using the Catalog, drag the roads layer into your Coconino County map. You will see it added to the Contents pane, but does it appear on the map? This is usually bad news and suggests that the projection (spatial reference - SR) is either not defined or is defined incorrectly. Even if this layer were in a different SR from the one you identified for the county layer, if it were defined correctly, it would be projected "on the fly" to fit with the existing projection. Unfortunately, it did not do that, so let's investigate.
Open the properties for this layer (roads) as you did earlier when looking at the county layer.

Question 2: What is the spatial reference of the roads layer?

Question 3: What is the numeric value for the top extent of the roads layer? (You can view this in the "extent" section of the "source" section within the layer properties.)

Finally, add the places layer from the geodatabase. It too appears in the Contents pane, but not in the map, so we must worry that this one is also not correct.

Question 4: What is the spatial reference of the places layer?

Question 5: What is the numeric value for the top extent of the places layer?

Use "zoom to full extent" and see if you can find where the three parts are each located. They are located where their "extents" say they are located, so they may or may not be easy to see. I expect that you will see the places and the county, but likely not the roads. You can return to the properties for each of the layers to answer the questions below if that helps you find them. You may also open the roads attribute table and select several rows within the roads layer - the selection may help to make them more visible within the map view after zooming to full extent a second time (make sure to clear any selections when done).

Question 6: The places layer appears to be far ________ of the county layer (North, South, East, West).

Question 7: The roads layer appears to be far ________ of the county layer (North, South, East, West - this layer will be the most difficult to see).

Finally, use the right-click "zoom to layer" option in the Contents pane on each of the layers to see that they are indeed drawing on the screen, just not together, as we need for GIS analytical work. When you move the mouse around each layer, note the coordinates at the bottom of your screen. This display converts the numbers in each file to latitude and longitude coordinates, but clearly the roads and places layers do not appear where they should.
Remember that the coordinate system we want is the one used by the county layer, so we will consider the coordinates and location of the county layer to be "correct". As it is clear that the three layers are not going to work together in our project, we need to get a little more information about how coordinate systems work in ArcGIS, tie it to the lectures and lab on projections we just finished, make educated guesses about the projections of the roads and places layers (since the county layer is the only projection we know to be correct), then try to confirm them as correct, and set them so that they overlap.

Basic rules for georeferencing in ArcGIS

To be used in analyses, ArcGIS Pro layers must have a file (or information inside the file itself, in a geodatabase) that defines the spatial reference of the native coordinates in the layer. The native coordinates are those hidden inside the field called "Shape" (where it says Polyline at right) in the table for a feature class. You cannot see them, but they determine where the map draws in space. These data are called metadata: data on data.

An example of a geographic spatial reference file is below on the left; the image on the right shows the same spatial reference file, but split into its separate parts to help illustrate how the information is delimited by brackets and commas. The parts should be intelligible to you from the lectures we just completed. The spatial reference has a GEOGCS (a geographic coordinate system), a DATUM which defines the SPHEROID name, radius and flattening, a starting point PRIMEM, and a UNIT of measurement.

Question 8: According to the spatial reference file above, what is the flattening in the WGS_84 SPHEROID? (Include up to 4 decimal places.)

An example of a projection spatial reference file is below. It contains the PROJCS (projected coordinate system), which includes the GEOGCS as well, since we must know which GEOGCS was used to measure the original data.
The standard lines are also defined in the description as PARAMETER entries. The file below can be read in separate parts delimited by the brackets and commas, like the first example above.

Question 9: According to the spatial reference file immediately above, what is the latitude of the northern standard parallel for USA_Contiguous_Albers_Equal_Area_Conic_USGS_Version? (Include the decimal places.)

If you download a file from the internet and there are no metadata, the spatial reference is considered "unknown". How would you determine what it should be? In a geodatabase, the metadata are stored (in a proprietary format) inside a folder (file geodatabase), so you cannot look for a file. Instead, you need to look up the properties of the layer, as you already did above. You can always look up the properties for all types of map data in ArcGIS.

If your spatial reference is "unknown" when you add the layer to the map, then ArcGIS will add the layer, but it may appear in the wrong place, at the wrong scale, or not at all. You just saw that problem when you added the roads and places layers to the map earlier in this assignment. Unless you just need to look at the appearance of the "map", which you can do with right-click "zoom to layer" as you did above, you must correctly define the projection using a tool in ArcToolbox. If you plan to use your data for any form of analysis, it is critical that you do not pass these issues by without fixing them! They will bite you badly later on. The following pages will guide you through the process of investigating and fixing the problem.

What if a spatial reference is attached to the file but is incorrect? The only time this is likely is if YOU set the reference wrong due to a lack of understanding of how ArcGIS works with spatial references. This happens a lot when you are first getting comfortable with ArcGIS, so we will look at setting up spatial references in this homework.
Setting up a Spatial Reference: 2 steps

•     Step 1: The first step is the hard one - decide what the spatial reference is (remember that geographic coordinates (lat/long - GCS) are also spatial references) and make an intelligent guess. Most data for local areas in the US are either geographic (NAD83, WGS84, etc.) or in one of the common projection systems we just discussed in the lectures:
•     Universal Transverse Mercator (UTM)
•     State Plane Coordinates (SPC)
•     Foreign layers may be in a national grid for that country, in UTM, or in geographic coordinates.
You can rarely be certain about a spatial reference, but you can make smart guesses and then test those guesses to confirm they are reasonable. This assignment will guide you through both parts of that process: making a guess and testing it.

•     Step 2: The second step is the easy one - use the Define Projection tool to define the projection. This tool is in the Data Management - Projections and Transformations toolbox, as seen at right. Using this tool too quickly, before you are reasonably sure of what coordinate system to choose, is the biggest mistake early GIS users make. So, we will experiment a bit with this map of Flagstaff, AZ and the vicinity.

A few clues to look for in guessing well. The systems are quite different in the coordinates they use (these differences are worth knowing well):
-    Geographic coordinates use angular units (degrees) that have a very limited range of values, and they are the only common ones with small numbers like that. A map in a Geographic Coordinate System (GCS) will show a small variance of values within the ranges listed below:
•     X (longitude) must range from -180 to +180
•     Y (latitude) must range from -90 to +90
-    UTM: the next easiest to identify, as all zones are standard.
•     The X range is usually 0 - 1,000,000 (500,000 m E occurs at the central meridian, and we know where the central meridians are located by longitude).
•     In the northern hemisphere (the US, for example), Y measures meters north of the equator, and there are 10,000,000 meters from the equator to the pole (the original definition of a meter!). So, a rough estimate is that 5,000,000 would be halfway to the pole - about equal to latitude 45°N.
-    SPC: the hardest to tell, because there are no specific origin placements as in UTM, but traditionally the values are in feet and may be larger than for UTM, especially in eastings. Make this your third guess, generally only after you have eliminated the other options.

Investigation in ArcGIS

For feature data (vector), simply look at the extent properties, as you did above for the roads and the places. They will tell you the range of values in the file's native coordinates. Right-click on the county layer in the Contents pane. Select Properties, then the Source tab. The units of the coordinates for the county layer are defined. We will use this spatial reference as the projection we want in the end, with the goal of making the other two layers match it when our process is complete.

Fill in the table below: write the extent coordinate range for "county" in the table so you have a record of its current values. Repeat for the places and roads layers. Note that they are not the same at all (filling in this table will make it significantly easier to identify the ranges of each layer and the differences between them). Find the three layers on the map to get an idea of how they differ, and by how much. Now that you have all the numbers in one place, think through the differences while considering the clues given at the top of this page. The goal is to identify how the locations of the layers in the map are incorrect while you also look at the numbers, eventually making a reasonable guess at the units used by each of the layers.
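The guessing rules above can be sketched as a rough classifier over a layer's extent values. The thresholds simply restate the ranges given in the text; the function name and the sample extents are hypothetical, and real layers should still be confirmed as the assignment describes:

```python
# Heuristic guess at a layer's coordinate system from the magnitude
# of its extent values (thresholds restate the clues in the text;
# this is a rough aid, not how ArcGIS itself decides anything).
def guess_spatial_reference(xmin, xmax, ymin, ymax):
    # Degrees: longitude in [-180, 180], latitude in [-90, 90]
    if -180 <= xmin <= xmax <= 180 and -90 <= ymin <= ymax <= 90:
        return "Geographic (degrees)"
    # UTM: X roughly 0 - 1,000,000 m; northern Y up to 10,000,000 m
    if 0 <= xmin <= xmax <= 1_000_000 and 0 <= ymin <= ymax <= 10_000_000:
        return "Possibly UTM (meters)"
    # Otherwise: often State Plane (frequently feet, larger eastings)
    return "Possibly State Plane (often feet) - investigate further"

# Roughly Flagstaff-like extents, invented for illustration:
print(guess_spatial_reference(-112.5, -111.0, 34.5, 36.0))
print(guess_spatial_reference(400_000, 470_000, 3.8e6, 4.0e6))
```

Plugging the extent ranges from your filled-in table into a check like this mirrors the "guess, then test" process the assignment walks through.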
Panning to/from each layer while doing this may help. Right-click on county in the Contents pane and "zoom to layer". As this one is defined, this is the view we want, but for GIS we require that the roads and the places align with the county. Right-click and "zoom to layer" for each of the other two layers to be sure they appear OK. Finally, click the zoom-to-full-extent button (the globe on the Map ribbon). What happens? Remember this full-extent view! You will likely make this mistake at some point this semester. Zoom to full extent is an excellent tool to help you identify the source of a problem in the future.

Match the two unknown layers to their correct projections. Though you would not normally know this without knowing more about the source of the data, each of the two unknown projections is either a correct UTM (NAD_1983_UTM_Zone_xxN) or GCS (GCS_North_American_1983). This narrows down the potential options and makes your task a lot easier; you just have to decide which is which. As mentioned earlier, guessing is the first step, and proving the likelihood/reasonableness of the guess is the second.

How to find the geographic layer: start out by determining the appropriate latitude and longitude for this part of the US. To do that, pick a place in the county area. Open a web browser and search for "Flagstaff, AZ latitude". That will give you a good idea of where we are looking to be. Since you will define the coordinates for each of the undefined maps, it is IMPERATIVE that you are very confident of your guesses before you define them. As noted above, if you define them incorrectly, you must go back and correct the issue - nothing will fit as it should in the map! Or worse, they will sort of fit, well enough that you may not notice the error, and all your results will be wrong. ArcGIS does not know (or care) whether you are right when you define a spatial reference, but you certainly do! Since geographic is the easiest to determine, begin with the roads.
These numbers (look back at the table you filled in above) look like they might be a GCS, due to the small range of values. They are relatively small values and in the appropriate range around Flagstaff's latitude and longitude. Since I told you that was one of the two choices, you already know this is true, but let's do the work to be sure, so we can do this on our own in the future. So, let's test that hypothesis. Zoom to the roads layer and find the intersection of US Hwy 180 (W Fort Valley Ranch Rd) and AZ 64 (State Rte 64); right click the roads layer in the TOC and turn on the labels to help find it. Enter the latitude and longitude at that intersection from your cursor in ArcGIS: read the display at the bottom center of the map view (the numbers in the example to the right illustrate what you are looking for; the values in the example are not correct).

Latitude?? From ArcGIS
Longitude?? From ArcGIS

This appears to be geographic, and it is near the Googled location of Flagstaff, so we are still fairly confident. To confirm, we need a comparison from a known map source. Let's try Google Maps. Search Google Maps (http://maps.google.com) for Bedrock City, AZ, then pan to the intersection of US 180 and AZ 64. In Google Maps, right click to view the coordinates, or set a point on the intersection and select "What's here". Compare it to the ArcGIS Pro reading that you entered in the table above (again, the example at right is not the correct location, but shows you what the information Google will provide may look like). If the coordinates from the point on Google Maps and the coordinates from ArcGIS Pro are somewhat close, you can be pretty certain the roads layer is geographic (NAD83) as opposed to UTM. We'll go with that. Make a note that roads are geographic.
Guessing for a UTM layer
We know:
•     latitude is nearly equal in distance per degree (parallels are nearly equally spaced)
•     all northern UTM zone Y coordinates start at 0 at the equator
•     there are 10,000,000 meters from the equator to the pole

Zoom to the Places layer and find Flagstaff; turning on the labels or a basemap may help. Click in the polygons and view their popups until you find it; use the map on the front of this assignment if you need a reference for where to look. We need to get a coordinate for a point on this map and compare it to a known value as well. Zoom back to Flagstaff on your Google map, and in ArcGIS zoom to the layer, then zoom in to see the Flagstaff polygon. Select a clearly visible corner point on the outline of your Flagstaff polygon in ArcGIS and record the coordinates in ArcGIS Pro (the screen says something we cannot trust, but we know that these are the native units for this layer, and we think they are UTM meters).

Easting?? From ArcGIS Pro
Northing?? From ArcGIS Pro

So, for the Places layer, if you use a Y coordinate from ArcGIS Pro, you can estimate the approximate latitude from this ratio. If that latitude is somewhat similar to what you looked up earlier for Flagstaff on Google Maps (35.1°N), then we can corroborate that the map is likely UTM. For example: if the 'y' value (latitude) you get from Google Maps for the corner of Flagstaff circled in the image above is 35.1°N, then, since we know that Google gives the values in degrees, we can convert this value to meters for comparison with UTM using the formula degrees / 90 × 10,000,000 = meters (UTM):

35.1 (dd) / 90 × 10,000,000 = 3,900,000 (m)

Since the projection does alter this a bit and 1° of latitude is not exactly the same all along, we only need to be close at first. Now that we have decided that we believe the units are UTM, we must identify which UTM zone is correct for our location.
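Both the rough degrees-to-meters conversion above and the UTM zone lookup that comes next are easy to sanity-check in a few lines of Python (a quick sketch; Flagstaff's approximate coordinates of 35.1°N, 111.65°W are the values looked up earlier in the lab):

```python
METERS_EQUATOR_TO_POLE = 10_000_000  # the clue used above

def latitude_to_rough_northing(lat_deg: float) -> float:
    """degrees / 90 * 10,000,000 = meters: rough UTM northing for a latitude."""
    return lat_deg / 90 * METERS_EQUATOR_TO_POLE

def utm_zone(longitude_deg: float) -> int:
    """UTM zone number: 6-degree strips numbered from 1, starting at 180 W."""
    return int((longitude_deg + 180) // 6) + 1

print(latitude_to_rough_northing(35.1))  # roughly 3,900,000 m, as in the text
print(utm_zone(-111.65))                 # -> 12
```

If the northing you read in ArcGIS Pro is in this ballpark, the UTM hypothesis holds up.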
You can search Google for maps and images of the UTM zones in the U.S., or use an image such as this one provided by the USGS: https://www.usgs.gov/media/images/mapping-utm-grid-conterminous-48-united-states

Question 10: What is the UTM zone number for the Flagstaff area?

*The tools used in the steps below have very different functions* - read the tool descriptions in ArcGIS Pro to be sure you are using the correct one.

Step 2: the easy one - define the projections (zoom to the County layer to see the results of the following steps)
Now that we are more confident that we know the correct spatial references for the roads and the places layers, step 2 is easy. In ArcGIS Pro, open Geoprocessing, then type Define Projection in the "Find Tools" search box. When it is found in the Data Management tools, open the "Define Projection" tool. Use the information in the two paragraphs below to set up and run the tool. Follow the dialog for the tool to enter the layer name and then the desired spatial reference. These can be a little hard to find, but you can track them through the lists as follows:

For the roads we need geographic: click the globe button and follow the path (similar to what we did in the projections lab) "Geographic Coordinate System" > "North America" > "USA and Territories" > "NAD 1983", then click OK; the "Coordinate System" parameter of the tool may show "GCS_North_American_1983". Then click Run.

Repeat for the places layer, and work through the menu as "Projected Coordinate Systems" > "UTM" > "North America" > "NAD 1983 (2011)" > "NAD 1983 (2011) UTM Zone xxN" (replace the xx with the appropriate zone #), then click OK, then click Run.

Step 3: Set the map projection so ArcGIS Pro knows what the drawing spatial reference should be: Right click on the map called Coconino County in the Contents pane and select Properties.
On the Coordinate Systems tab, you can use the existing layer "County" to set the projection you want for the entire map: NAD_1983_StatePlane_Arizona_Central_FIPS_0202 (US Feet). This sets the projection of the map that is on your screen correctly. This is most likely already correct, but it is an important check before the final step of storing these layers correctly. Click "zoom to full extent" (globe button next to the Explore tool) and it is "visually" a lot happier now, as they all line up!! BUT they only look OK: the data are still not projected into the NAD_1983_StatePlane_Arizona_Central_FIPS_0202_Feet coordinate system and thus would not be ready for analysis. Reopen the properties (Source) for roads and note that the extent has not changed!!!

*All three layers should appear to be in the correct location before beginning step 4. If they look good, move on to step 4; if the layers do not appear in the correct locations, return to review step 2 (this page) to resolve that issue first.
*Having a backup of your data is always helpful. If you have made an error in step 2 that you cannot easily fix, you can close the project, delete the project folder ("Projections"), and extract a clean copy from "Projections.zip". Just add the data to the map and pick back up at step 2 (top of this page).

Step 4 – Physically project the roads and places so the layers match for analysis
The final step, after we made the correct guesses and tested them well, is to convert the native coordinates of the files into new layers. We see a good map now because ArcGIS projects to the map spatial reference system on the fly when it knows the correct spatial referencing system for each layer. The spatial reference metadata are correct from step 2 (defining the projection). Our final job is to make new files with native coordinates that are in the desired system. To do this, use the "Project" tool from Geoprocessing to convert the coordinates.
•     Open Geoprocessing and find the Project tool by searching for "Project". This tool is right next to the one you used above in the toolboxes.
•     Open the tool and select the roads layer as the input.
•     Name the new file Roads_SPC. By default it will go in the Projections.gdb geodatabase (hover your cursor over the output dataset to verify).
•     Select the Current Map option as the spatial reference for the output (we set that a few steps back, so that is easy).
•     It will add Roads_SPC to the map; you should remove the old one (Roads) to avoid confusion.
•     Repeat for the Places layer, naming the output Places_SPC.
•     Confirm that the new Places_SPC layer appears in the Contents pane and remove the original Places layer.
•     *If you have run the tool and either layer does not automatically appear in the Contents pane, check the home geodatabase, find the SPC layer, and drag the new SPC layer to the map.
•     You do not need to do the County layer, as it was already correct.

Questions 11-22: Now that they are projected, what is the extent of each map layer under the new projections? Fill in the following table (questions 11-22); integer portions are fine. Enter your answers into the homework assignment quiz on Canvas by the posted due date.

$25.00 View

[SOLVED] Mathematics

1. Let {a_n} be a sequence in an arbitrary metric space that does not converge but has a subsequence converging to a.
a. Show that for every ε > 0 there are infinitely many a_n in M_ε(a), the ε-ball around a.
b. Show that there exists ε_0 such that for every ε with ε_0 > ε > 0 there are infinitely many a_n not in M_ε(a).
Remark: here "infinitely many" refers to infinitely many distinct indices n. We are not worried about how many different points occur in the sequence, but we are concerned about how many times the sequence does something.

2. Let (M, d) be a metric space and define ρ.
(a) Show that ρ defines a metric on M.
(b) Show that the identity map is a homeomorphism from (M, d) to (M, ρ).
(c) If (M, d) is R with the standard Euclidean metric, i.e. d(x, y) = |x − y|, prove that (M, ρ) is bounded. For extra credit, show that (M, ρ) is not isometric to an open subset of R (see below for definitions).
These definitions are in Pugh and Rudin and were given in class, but may be a bit hard to find, so: if (M, d_M) and (N, d_N) are metric spaces, a map f : M → N is an isometry if it is a bijection and d_N(f(x), f(y)) = d_M(x, y) for all x, y in M. If there is an isometry from M to N, we call M and N isometric.

3. Construct a compact subset of R with a denumerable (i.e. infinite and countable) collection of cluster points. Also construct a compact subset of R with a denumerable set of limit points but only a finite collection of cluster points.

4. Prove that every infinite sequence {x_n} in R has a monotone infinite subsequence. Here monotone can be either increasing, by which we mean x_n ≤ x_{n+1}, or decreasing, by which we mean x_n ≥ x_{n+1}. Notice these words are intended in a slightly odd sense, so increasing really means non-decreasing and decreasing really means non-increasing.


[SOLVED] Pop Marts Business Strategies Individual Assignment 2

Individual Assignment 2
Instructions: Using the case study on Pop Mart, write a 1,600-1,800 word essay addressing the following three questions. Your essay should demonstrate critical analysis and incorporate real-world examples to support your arguments.
Essay Questions
1. Pop Mart's Business Strategies:
· Analyze the business strategies (3 levels) of Pop Mart. How do you evaluate the effectiveness of using these business strategies by Pop Mart to expand in China and around the world?
2. Nature and Importance of Leadership in Strategy:
· Identify the traits and competencies of effective leaders demonstrated by Wang Ning. Analyze how Wang Ning's leadership has shaped Pop Mart's strategic direction. Why is leadership critical in ensuring the success of a company's strategy, particularly in the highly competitive toys and collectibles industry in the consumer discretionary sector?
3. Sustainability Initiatives of Pop Mart:
· Explain why sustainability is important for a company like Pop Mart. Analyze how Pop Mart has implemented sustainability practices across its materials, supply chain, operations, and retail ecosystem, and discuss the challenges it faces in maintaining and advancing its ESG performance in a rapidly evolving consumer and regulatory environment.
Assignment Guidelines
· Word Count: 1,600 to 1,800 words (approximately 500-600 words per question).
· Structure: Your essay should be well-organized, with a clear introduction, body (addressing the three questions), and conclusion.
· Referencing: Use APA style for citations where applicable.
· Submission Format: Submit in a Word or PDF document.
· Deadline: Hard copy. Class A01: 4 Oct (Saturday) before 12:15pm during class. Class A02: 5 Oct (Sunday) before 12:15pm during class.


[SOLVED] CS-E4780 Scalable Systems and Data Management Course Project Efficient Pattern Detection over Dat

CS-E4780 Scalable Systems and Data Management Course Project
Efficient Pattern Detection over Data Streams

1 Background
Systems for event stream processing continuously evaluate queries over high-velocity event streams to detect user-specified patterns with low latency. Since the patterns become less important over time, it is crucial to detect them as quickly as possible. However, low-latency pattern detection is challenging, because query processing is stateful and the set of partial matches maintained by common algorithms for query evaluation grows exponentially in the size of the processed event data. Handling intermediate states during query evaluation is particularly challenging under dynamic stream conditions. Event sources frequently exhibit variable arrival rates and diverse data distributions, which in turn lead to highly fluctuating query selectivities. During periods of elevated load, the number of events to be processed may exceed the system's computational capacity, rendering exhaustive query evaluation impractical or even infeasible. In such situations, maintaining responsiveness requires the system to apply load-shedding techniques. Rather than striving for all the results at the cost of unbounded delays, the system performs best-effort query evaluation that processes a subset of the intermediate computational results, aiming to maximize the quality of the reported results while adhering to strict latency constraints. This trade-off is fundamental in stream processing, where timely insights are often more critical than full completeness [8, 9].

2 Complex Event Processing
Complex Event Processing (CEP) is a prominent technology for robust and high-performance real-time detection of arbitrarily complex patterns in massive data streams [4]. It is widely employed in many areas where extremely large amounts of streaming data are continuously generated and need to be promptly and efficiently analyzed on-the-fly.
Online finance, network security monitoring, credit card fraud detection, sensor networks, traffic monitoring, the healthcare industry, and IoT applications are among the many examples. Complex Event Processing (CEP) has become integral to modern data-driven environments because it allows organizations to identify complex temporal or semantic patterns in streaming data in real time [4]. Beyond financial trading and fraud detection, CEP frameworks like FlinkCEP enable engineers to specify event patterns and react immediately when patterns are recognized [2]. Industry guides and glossaries emphasize that CEP matches incoming events against sophisticated patterns to recognize relationships and trigger real-time responses [5, 7]. Practical guides also highlight the architectural implications of CEP, noting that robust implementations must handle streaming data at scale, maintain low latency, and support intelligent event sourcing [7]. In short, CEP systems are not only crucial for monitoring and security but also form the backbone of modern applications that must react swiftly to complex sequences of events across many sectors [5].

The patterns detected by CEP engines are often of exceedingly high complexity and nesting level, and may include multiple complex operators and conditions on the data items. Moreover, these systems are typically required to operate under tight constraints on response time and detection precision, and to process multiple patterns and streams in parallel. Therefore, advanced algorithmic solutions and sophisticated optimizations must be utilized by CEP implementations to achieve an acceptable level of service quality. Figure 1 presents an overview of the OpenCEP structure [6].
Fig. 1: Working Flow of a CEP System

Incoming data streams are analyzed on-the-fly, and useful statistics and data characteristics are extracted to facilitate the optimizer in applying the aforementioned optimization techniques and maximize the performance of the evaluation mechanism – a component in charge of the actual pattern matching. By incorporating a multitude of state-of-the-art methods and algorithms for scalable event processing, OpenCEP can adequately satisfy the requirements of modern event-driven domains and outperform existing alternatives, both in terms of actual performance and provided functionality. OpenCEP features a generic and intuitive API, making it easily applicable to any domain where event-based streaming data is present.

3 Data Set: Citibike
The Citi Bike system dataset [3] provides a comprehensive record of bicycle-sharing trips in New York City, capturing millions of rides each month. The dataset is released as monthly compressed CSV files, with one row per trip, enabling fine-grained analysis of user mobility patterns. Each record includes the ride identifier, bicycle type (classic or electric), start and end timestamps, origin and destination stations (with names, identifiers, and geographic coordinates), and rider classification as either a subscriber (member) or occasional (casual) customer. When monthly data exceed one million trips, the dataset is split across multiple files within a single archive to maintain accessibility. These attributes make the dataset particularly suitable for studying urban mobility, transportation demand forecasting, and the design of sustainable transportation systems.
3.1 Attributes and Format
Each trip record is structured with a consistent set of attributes: Ride ID (a unique trip identifier), Rideable type (e.g., classic or electric bike), Started at and Ended at timestamps, Start station name and Start station ID, End station name and End station ID, Start latitude and Start longitude, End latitude and End longitude, and finally, a categorical label indicating whether the trip was made by a member (annual subscriber) or a casual rider. This schema balances the need for detailed trip-level information with rider privacy, while still allowing researchers to study spatiotemporal demand dynamics, trip flows, and user behaviors.

4 Project Requirements and Problem Definition
This course project requires students to implement a basic load shedding strategy for a CEP system to handle bursty workloads while ensuring low-latency processing, based on the OpenCEP codebase [6]. Students will work with the Citi Bike dataset and evaluate the following query.

Query: detect hot paths. 'Hot paths' consist of stations where bicycles accumulate faster than at other stations. They indicate the trend of movement for the bike fleet. Since bikes quickly accumulate in certain areas, the operator moves more than six thousand bicycles per day among stations. Hence, the detection of 'hot paths' of bike trips promises to improve operational efficiency.

PATTERN SEQ (BikeTrip+ a[], BikeTrip b)
WHERE a[i+1].bike = a[i].bike AND b.end in {7,8,9}
  AND a[last].bike = b.bike AND a[i+1].start = a[i].end
WITHIN 1h
RETURN (a[1].start, a[i].end, b.end)

Listing 1: Citi Bike sequence pattern

Listing 1 shows a pattern detection query to detect such 'hot paths', using the SASE query language [1]: within an hour time window, a bike is used in several subsequent trips, ending at particular stations, No. 7, No. 8, and No. 9. Here, the Kleene closure operator detects arbitrary lengths of paths.

Project Requirement.
Students are required to implement the state management of partial matches for the above query, including load shedding strategies to handle bursty workloads while ensuring low-latency processing. The implementation should be based on the OpenCEP codebase [6]. The measure of success will be the ability to process incoming events with (1) low latency, even under high load conditions, while (2) maintaining the recall of pattern detection.

4.1 Definitions
We summarize key Complex Event Processing (CEP) concepts used in this project and needed to reason about the query in Listing 1 and the required load shedding logic:

Primitive (raw) event: A single input record arriving on a stream (e.g., one bike trip). It has a unique timestamp and a set of typed attributes.
Attribute: Named field inside an event (e.g., start station id, end station id, start time).
Event time: The timestamp embedded in the event payload describing when the underlying real-world action occurred.
Processing time: The wall-clock time at which the CEP engine observes/processes the event. Event time and processing time may differ under delay or disorder.
Complex event / Match: A higher-level event produced when a set (typically an ordered sequence) of primitive events satisfies a pattern.
Pattern: A declarative specification (here SASE) describing structural (ordering, repetition), temporal (window), and predicate constraints over events.
Sequence (SEQ): An operator requiring a temporal/positional order of constituent events.
Kleene closure (+): Operator allowing one-or-more repetitions of a sub-pattern, creating potentially unbounded numbers of partial matches.
Partial match / Partial state: Data structure holding the events selected so far for a pattern instance that is not yet complete. Managing the population of partial matches dominates memory and CPU cost.
State store: Logical repository (in-memory tables, lists, indexes) the engine uses to retain partial matches and auxiliary metadata during evaluation.
Correlation predicate: A condition relating attributes across different events in a partial match (e.g., same bike id, station chaining a[i+1].start = a[i].end).
Selection predicate / Filter: A local condition on a single event (e.g., b.end in {7,8,9}).
Window (WITHIN T): Temporal constraint limiting the maximal span (end time minus first event time) of a match; enables pruning of stale partial matches.
Selectivity: Fraction of incoming events (or event combinations) that survive predicates or advance partial matches; impacts growth of state.
Load shedding: Intentional dropping or early termination of processing for some events or partial matches under resource pressure to keep latency bounded.
Shedding unit: The granularity at which the system discards work: event-level (skip ingest), partial-match-level (prune oldest / lowest-utility states), or predicate-level (relax a constraint temporarily).
Utility / Score: Heuristic value estimating the expected contribution of a partial match/event toward producing a future complete match (e.g., based on remaining window time, path length so far, station popularity).
Latency: Time between arrival (or event time) of the last contributing primitive event and emission of the complex event result.
Throughput: Number of primitive events processed per second.
Recall: Fraction of true pattern matches still reported after any shedding (measures correctness under best-effort mode).
Pruning: Safe elimination of partial matches proven impossible to complete (e.g., window expired, violated chaining constraint) without loss of recall. Note this is distinct from load shedding.
Expiration: Event or partial match removal triggered by window boundaries or watermarks.
Policy examples to shed: (i) probabilistic sampling of new partial matches/events, (ii) lowest-utility partial match/event drop, etc.
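The "lowest-utility drop" policy above can be sketched with a bounded priority store. This is an illustrative Python sketch only: the class name, the dict-based partial-match representation, and the utility heuristic (chain length plus remaining window time) are assumptions for illustration, not the OpenCEP API.

```python
import heapq

class PartialMatchStore:
    """Keep at most `capacity` partial matches; shed the lowest-utility ones."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []       # min-heap of (utility, counter, match)
        self._counter = 0     # tie-breaker so dicts are never compared
        self.shed_count = 0

    def utility(self, match: dict) -> float:
        # Heuristic: longer chains closer to completion are worth more.
        return match["chain_length"] + match["window_time_left_s"] / 3600.0

    def add(self, match: dict) -> None:
        heapq.heappush(self._heap, (self.utility(match), self._counter, match))
        self._counter += 1
        if len(self._heap) > self.capacity:   # over budget: drop lowest utility
            heapq.heappop(self._heap)
            self.shed_count += 1

store = PartialMatchStore(capacity=2)
store.add({"chain_length": 1, "window_time_left_s": 3000})
store.add({"chain_length": 3, "window_time_left_s": 1200})
store.add({"chain_length": 4, "window_time_left_s": 600})
print(store.shed_count)  # -> 1 (the length-1 chain was shed)
```

In a real system the capacity would be driven by an overload detector (queue length or observed latency), and the utility function could also weight target stations 7, 8, 9 as the project suggests.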
For this project, focus on: (1) measure the overload detection latency and throughput under bursty workloads; (2) implement a load shedding strategy that balances latency and recall under bursts (e.g., prioritizing longer chains nearing completion at target stations 7, 8, 9 within 1h); and (3) evaluate the effectiveness of the shedding strategy in maintaining low latency while maximizing recall.

5 Submission Requirements
Students must implement a system to solve the problems described in Section 4. Each student is required to submit a link to the system's GitHub repository (ensure the repository is set to public visibility) along with a 4-page report detailing the system. The detailed requirements for the project report are as follows:

5.1 Project Report Requirements
Template. Use the ACM Proceedings Template: LaTeX or Word.
Report Organization. The report should be 4 pages and include the following sections. Students may use their own section titles, but the content should align with the following structure:
(1) Abstract.
(2) Introduction. Provide a brief background and overview of the project.
(3) System Architecture. Describe the design choices and motivations in detail.
(4) Implementation. Describe how the system was implemented, including the algorithmic design, key data structures, and the estimated time and space complexity of the core components.
(5) Performance Evaluation. Evaluate the system's performance. Measure recall under latency bounds set to 10%, 30%, 50%, 70%, 90% of the original latency without load shedding. As an advanced requirement, assess scalability (how performance varies with resources such as CPU cores and workloads such as events per second) and report resource utilization (e.g., CPU and GPU usage).
(6) Conclusion.

References
[1] Jagrati Agrawal, Yanlei Diao, Daniel Gyllstrom, and Neil Immerman. 2008. Efficient pattern matching over event streams. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data.
147–160.
[2] Apache Software Foundation. 2025. FlinkCEP - Complex event processing for Flink. https://nightlies.apache.org/flink/flink-docs-master/docs/libs/cep/ Documentation for an unreleased version of Apache Flink.
[3] Citi Bike. 2025. Citi Bike System Data. https://citibikenyc.com/system-data.
[4] Gianpaolo Cugola and Alessandro Margara. 2012. Processing flows of information: From data stream to complex event processing. ACM Computing Surveys (CSUR) 44, 3 (2012), 1–62.
[5] Databricks. 2024. What is Complex Event Processing [CEP]? https://www.databricks.com/glossary/complex-event-processing
[6] Ilya Kolchinsky. 2025. OpenCEP: Complex Event Processing Engine. https://github.com/ilya-kolchinsky/OpenCEP. Accessed: 2025-08-25.
[7] Redpanda Data. 2024. Complex event processing - Architecture and other practical considerations. https://www.redpanda.com/guides/event-stream-processing-complex-event-processing
[8] Cong Yu, Tuo Shi, Matthias Weidlich, and Bo Zhao. 2025. SHARP: Shared State Reduction for Efficient Matching of Sequential Patterns. arXiv preprint arXiv:2507.04872 (2025).
[9] Bo Zhao, Nguyen Quoc Viet Hung, and Matthias Weidlich. 2020. Load shedding for complex event processing: Input-based and state-based techniques. In 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 1093–1104.


[SOLVED] STAT 311 Writeup Assignment 0

Writeup Assignment 0
Please complete the following:
• Address any questions or code below.
• Compile the document into a PDF file. The PDF file must be multiple pages; if your file is a single page, try compiling to an HTML, opening it in your browser, and printing the page to a PDF.
• Submit to Gradescope.
• Paginate individual questions correctly, selecting which pages each question pertains to, or your assignment will not be graded and will require resubmission.

HW0 Programming Assignment
For writeup assignments that build upon the programming assignments, you will need to execute some of the homework code within the RMarkdown file. Copy all of your code from Assignment 0 into the block below.
# Paste your entire HW0 Programming assignment code here

Including Graphics
When importing data for writeup assignments you are free to change code that loads data, such as changing the file name or folder address. The code below imports a dataset about the number of rental bikes in use each hour for a random set of hours over a two-year span. The data is saved in the data frame bikes, under the variable name bikes$rentals. Using the function hist(...), create a histogram of the bike rentals. Utilizing ?hist, find the optional function arguments to do the following:
• Plot a histogram of the data
• Plot a density, rather than frequency, histogram
• Change the main title of the plot to something more appropriate
• Change the x axis of the plot to something more informative
bikes
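The density-versus-frequency distinction asked for above is worth seeing numerically: in a frequency histogram the bar heights sum to the number of observations, while in a density histogram the bar areas sum to 1 (in R this is the freq = FALSE argument to hist). A small Python/NumPy sketch, with made-up rental counts standing in for bikes$rentals:

```python
import numpy as np

# Synthetic stand-in for bikes$rentals (illustrative values only).
rentals = np.array([12, 35, 41, 7, 88, 53, 19, 64, 102, 45, 33, 71])

# Frequency histogram: heights are raw counts and sum to len(rentals).
counts, edges = np.histogram(rentals, bins=4)

# Density histogram: bar areas (height * bin width) sum to 1 instead.
density, edges = np.histogram(rentals, bins=4, density=True)
areas = density * np.diff(edges)

print(counts.sum())            # -> 12 (total observations)
print(round(areas.sum(), 10))  # -> 1.0
```

In the assignment itself you would pass the equivalent arguments to R's hist along with main and xlab for the title and axis label.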


[SOLVED] STAT 311 Programming Assignment 0

Programming Assignment 0
STAT 311
Please complete the following problems and submit a file named STAT311-HW0.R to Gradescope. A total of 6 out of 7 points should be immediately visible; 1 point is hidden and tests your function. Remember:
• Start from the provided skeleton code file.
• Do not rename provided data files or edit them in any way.
• Do not use global paths in your script to read in data. Instead, use setwd() interactively in the console to set your working directory. The submitted R file should read data in by file name only, with no folders in the path name.
• Do not destroy or overwrite any requested variables in your program. They are all required for the autograder to run.
• Check to make sure you do not have any syntax errors.
• Reset the working environment and rerun your entire assignment to ensure it runs without errors.
• Make sure your submission is named STAT311-HW0.R

Part 1: Create a vector called myVector of length 5 with values [1, 2, 3, 4, 7].
Part 2: Create a string called myString and have it store your name.
Part 3: Read the "Rule #1 - Resolving Gradescope Submission Issues" post on EdStem and find the hidden value for HW0. Save it in a variable called HW0P3.
Part 4: The provided code does not run properly as an R script. Fix it without editing the values or variable names.
Part 5: Create a function that takes a single variable (a 3x3 matrix or dataframe) and returns a vector equal to the sum of the rows of the input matrix or dataframe. Call it myFunction, which you can test on the dataset SampleData.csv, which is saved in the variable sampleData1.
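Part 5's required behavior (row sums of a 3x3 input) is easy to prototype outside R as a reference point. The deliverable must still be R (where rowSums() does this directly); this Python sketch just shows the expected input/output shape, with an illustrative sample matrix:

```python
# Reference for Part 5 (the actual submission must be an R function):
# sum each row of a 3x3 table of numbers.
def row_sums(matrix):
    """Return a list with the sum of each row of a matrix (list of lists)."""
    return [sum(row) for row in matrix]

sample = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
print(row_sums(sample))  # -> [6, 15, 24]
```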


[SOLVED] Parallel computing with GPUs for fast ultrasonic imaging

Parallel computing with GPUs for fast ultrasonic imaging
Project Type: Individual Project
Description
Parallel computing lends itself well to fast processing of ultrasonic signals, enabling high-frame-rate imaging of fast dynamic processes. Examples include ultrasonic imaging during additive manufacturing and medical ultrasonic imaging of living organisms. GPUs are currently being used to facilitate parallel computing; however, the programming architecture must adhere to certain coding rules for the benefits to be apparent. The aim of this project is to develop and use a GPU function in order to accelerate the processing of ultrasonic signals for ultrasonic imaging. An ultrasonic sensor transmits a signal which travels within a material and is received by another sensor. An ultrasonic phased array is an array of such transmit-receive sensors, and the ultrasonic signals are sequences of time vs. ultrasonic amplitude from all possible transmit-receive pair combinations. The signals are then processed to create an ultrasonic image. The processing algorithm that will be followed is called the Total Focusing Method (TFM), which is the superposition of all received signals with an appropriate time delay corresponding to each point (pixel) of the ultrasonic image. The GPU function will process the TFM algorithm and demonstrate the advantage of parallel computing compared to the default, linear computing performed on the CPU (i.e., computing on a single thread). All ultrasonic data sets will be provided. The project will address the high-frame-rate requirements of 2D ultrasonic imaging, from data provided by a 1D ultrasonic phased array. The GPU algorithm may be expanded to the 3D ultrasonic imaging case. The project requires a certain degree of programming skills (Python or Matlab).

Key Objectives
· Familiarise with the concept of the TFM algorithm and the various methods of signal processing in ultrasonic phased arrays.
· Familiarise with the use of GPU for parallel computing. · Develop a signal processing function that uses GPU for 2D ultrasonic imaging from pre-recorded ultrasonic data. · Compare the processing time between parallel (GPU) and linear (CPU) programming  functions. · Identify bottlenecks and ways to improve the speed of parallel programming for the TFM imaging algorithm. · Expand the GPU based signal processing to the 3D ultrasonic imaging case.
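The delay-and-sum at the heart of TFM can be sketched in a few lines. Below is a plain CPU reference in Python/NumPy, a minimal sketch with illustrative names (`fmc`, `element_x`, `tfm_pixel` are my own, not part of the project hand-out); the GPU version would evaluate this same sum for every pixel in parallel, which is exactly where the speed-up comes from.

```python
import numpy as np

def tfm_pixel(fmc, element_x, x, z, c, fs):
    """Sum every transmit-receive trace at the round-trip delay for pixel (x, z).

    fmc[t, r, n]  : time traces for transmit element t, receive element r
    element_x     : 1D array of transducer element positions along the array
    c, fs         : wave speed and sampling rate (consistent units assumed)
    """
    n_el, _, n_samp = fmc.shape
    # One-way time of flight from each element to the pixel
    tof = np.sqrt((element_x - x) ** 2 + z ** 2) / c
    total = 0.0
    for t in range(n_el):
        for r in range(n_el):
            idx = int(round((tof[t] + tof[r]) * fs))  # round-trip delay in samples
            if 0 <= idx < n_samp:
                total += fmc[t, r, idx]
    return total
```

The full image is this function evaluated over a grid of (x, z) pixels; each pixel is independent, so the double loop over pixels maps naturally onto GPU threads.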

$25.00 View

[SOLVED] Statistics 7 Fall - 2025 Prolog

Statistics 7 Fall - 2025 I. Course Description * Introduces basic probability as well as inferential statistics, including confidence intervals and hypothesis testing on means and proportions, the t-distribution, Chi-Square, regression and correlation. The F-distribution and nonparametric statistics are included if time permits. II. Student Learning Outcomes * By the end of this course, you should be able to analyze and present data, design observational and experimental studies, use probabilities to model and predict random events, and use inference procedures to test hypotheses and estimate population parameters to reach conclusions in context. What you learn in this class should help you understand broadly the methodology, results, and issues of studies presented in your other classes or in news stories. I also hope that you will come to appreciate statistics as a cool and really interesting subject! III. How to Succeed in the Course To succeed in this course, please come to class and discussion prepared and on time, ask questions when you need help, attend office hours, and actively participate in in-class discussions. For quizzes, make sure you work through the material for the quizzes, but also work toward mastering it for the in-person final. IV. Required Materials * Mind On Statistics, 6th edition with Webassign, by Jessica Utts and Robert Heckard. You need the Webassign homework access. V. Course Technology Requirements * We are using free statistical software called R. You can download it onto your own computer (PC or Mac). Instruction will be provided in class and/or in discussion sections. The program is available in the ICS computer labs as well, but to use it there you need to set up an account with ICS. Instructions for doing so are provided at this link: http://www.ics.uci.edu/~lab/students/acct_activate.php. VI. Communication Expectations * Please contact the instructor via email. VII. 
Assignment Details *
● Topic Quizzes – These short assessments are meant to be completed after reading the corresponding chapters and the optional video material. They primarily focus on key terminology and conceptual understanding.
● Lecture assignments – These assignments are designed to reinforce and expand on the material introduced in the videos. They typically involve more detailed calculations and applied problem-solving.
● Homework – These assignments are an opportunity to see how the various concepts connect and to develop a deeper understanding of the overall material.
● Course Quizzes – Quizzes are assigned approximately every other week and are intended to help students practice and master core concepts. They serve as checkpoints for your progress and preparation for the cumulative final exam.
● Final Exam – The final is comprehensive and may include any topic covered throughout the course. Students should review all prior materials and aim to synthesize their learning into a strong overall understanding.
Late and Incomplete Assignments A one-time extension is allotted to students who need an extension for topic quizzes, lecture assignments, or homework assignments. There are no extensions for course quiz assignments.
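As a taste of the inference this course covers, a one-sample z test for a proportion needs nothing beyond Python's standard library. The sketch below (function name and example numbers are my own, purely for illustration) uses the normal approximation:

```python
from math import sqrt
from statistics import NormalDist

def proportion_z_test(successes, n, p0):
    """Two-sided one-sample z test for a proportion (normal approximation)."""
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)         # standard error under the null hypothesis
    z = (p_hat - p0) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: 56 heads in 100 flips; is the coin fair (p0 = 0.5)?
z, p = proportion_z_test(56, 100, 0.5)   # z = 1.2, p ≈ 0.23: no evidence against fairness
```

The same structure (estimate, standard error, test statistic, p-value) carries over to the t procedures for means covered later in the course.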

$25.00 View

[SOLVED] BAFI1029 Derivatives and Risk Management Assessment 3

BAFI1029 Derivatives and Risk Management Assessment 3 - Individual Risk Management Report (50%) Report and Excel Due Date: Week 13 - Friday, 10th October 2025 by 23:59 (Singapore Time) Assessment Task This is an individual task. In this assessment, students are required to form one equity portfolio, evaluate its risks and provide solutions to manage the risk. The goal of this individual assignment is to gain a better understanding of the portfolio investment (in the US stock market) and risk management process. Below are the tasks:
● to build one equity investment portfolio and justify the stock selection
● to hold the portfolio from Monday, 1st September 2025 (beginning of Week 8), to Friday, 19th September 2025 (end of Week 10) and observe its changes
● to identify the portfolio risk by reporting the portfolio's VaR
● to provide suggestions for managing the risk
● to communicate your investment and risk management process using a professional report
Portfolio Creation Please follow the following steps to build one portfolio.
1. Create an account (with your real first name & surname) on www.marketwatch.com
2. Create a watchlist of one Portfolio based on the close price as of Monday, 1st September 2025. Note: The specified date here is used to start the observation period of your portfolio, not the date on which you must perform the task. For example, you can create your portfolio on Monday, 1st September 2025, or on dates such as 10th September 2025 or 1st October 2025, but you will still observe the price change over the sample period Monday, 1st September 2025 to Friday, 19th September 2025.
3. This watchlist of one Portfolio consists of four stocks:
a. Choose any three stocks from Table 1 (on page 5) plus Tesla Inc. (TSLA).
b. For the Tesla stock, the number of Tesla shares must equal "the last three digits of your student ID". For example, if your student ID is S3612345, you would hold 345 shares of Tesla in your portfolio.
c. 
Determine the weights and shares for the rest of the stocks you chose in step a.
d. You have USD 1 million for this Portfolio. Note: Since shares can't be bought in fractions, a tiny variation from the specified budget is acceptable. You can choose to hold some cash if you believe the investment opportunity is not good enough, but you will need to justify this decision in your report. The total $1 million investment you have is based on the share prices on Monday, 1st September 2025.
4. Take screenshots of your portfolio and the necessary information in all sections. Make sure you attach them in the Appendix of your submitted report.
5. Suppose this is a Buy-and-Hold strategy; therefore, do not change your portfolio settings during your holding period, Monday, 1st September 2025 to Friday, 19th September 2025.
Questions and Marking Guide: Your report must include the following sections:
1. Trading philosophy: (2 marks) Give an overview of your philosophy to form the portfolio. You should identify yourself as a value or growth investor or a mixture of both. Provide brief definitions for value/growth investing.
2. Portfolio construction: (6 marks) Present your initial portfolio, including information on why you have invested in the stocks in your initial portfolio (three stock selections for the Portfolio).
a. The overall market and macroeconomic condition (3 marks)
b. Industry consideration and/or diversification, specific stock's strengths/positive prospects (3 marks)
3. Risk identification: (22 marks) In this part, you should discuss the risk profile of your portfolio. On Monday, 1st September 2025, calculate the VaR of your Portfolio using 2-year daily historical stock prices between 28th August 2023 (inclusive) and 29th August 2025. The discussion should include the following points:
a. Calculation and discussion of the one-day 99% Value at Risk of each stock in your portfolio using the historical simulation approach. 
That means, if you have four stocks in total, you need a VaR for each. Show key steps of your workings. (4 marks)
b. Calculation and discussion of the 10-day 95% Value at Risk of your portfolio using the historical simulation approach. Show key steps of your workings. (4 marks)
c. Calculation and discussion of the 10-day 95% Value at Risk of your portfolio using a model-building approach. Show key steps of your workings. (4 marks)
d. Discussion of the performance of the VaR in (b) and (c), by comparing your calculated VaR results and the portfolio's actual 10-day returns from 1st September 2025 to 12th September 2025. (6 marks)
e. Calculation and discussion of the one-day 99% Expected Shortfall (CVaR) of your portfolio using a historical simulation approach. Show key steps of your workings. (4 marks)
Note: A VaR template can be found in Week 11's material on Canvas. You can download historical stock prices from MarketWatch as a CSV file, but it limits the maximum data to one year at a time, so you'll need to download multiple times for longer periods. As an illustration, this is the link to the historical daily prices of the Tesla stock. You're also welcome to use Yahoo Finance to obtain historical data, but be aware that it will only appear as a screenshot, not as a CSV download, without a Gold subscription.
4. Hedging using Options: (10 marks) Suppose you hold the portfolio until the submission date. On any day between Monday, 1st September 2025 and the submission date, 10th October 2025, you will use an option contract to hedge any one of your three selected stocks (excluding Tesla) in your Portfolio. (Please take a screenshot of the option quote and spot price as of the same day and attach them in the Appendix of your submitted report.)
a. 
You need to determine and explain which option you want to use (i.e., specify whether it is a call or put, the transaction date, when the expiration date is, the appropriate strike price, whether you should go long or short, the number of contracts, etc.). Provide justification for your decision. (6 marks)
b. Discuss when you will exercise your option and its potential payoff. (2 marks)
c. To further manage your portfolio risk, you decide to explore combining the protective put with a covered call, where you write a call option on the same stock you are hedging. Describe how combining your protective put with a covered call creates a more complete hedge (i.e., a collar). In your answer: Specify the strike price and expiry of the call option you will sell and justify your choice. Explain how this addition changes the overall payoff of your position. Compare the trade-off between the reduced cost of hedging and the capped upside. Support your explanation with a payoff diagram or table. (2 marks)
Note: The budget for option transactions (option price per share) is limited to within 1.5% of the stock's market price and is not included in the initial $1 million budget (for either call or put options). You can construct the option trading strategy anytime during the portfolio holding period.
5. Hedging using Swaps: (10 marks) As an Australian-based investor, you want to borrow 1 million U.S. dollars at a fixed interest rate to match your investment cash flows. To achieve this, you enter into a two-year currency swap agreement with Mrs. Phoebe Phan, who wishes to borrow Australian dollars at a floating interest rate. The amounts required by both parties are roughly the same at the current exchange rate. You and Mrs. 
Phoebe Phan have been quoted the following interest rates, which have been adjusted for the impact of taxes: Design a swap that will net a bank (Bank A), acting as an intermediary, 20 basis points per annum. Unlike a swap equally attractive to both parties, this task requires you to design a swap that allocates 60% of the advantage (i.e., gain) to you and 40% of the advantage (i.e., gain) to Mrs. Phoebe Phan. Determine the rates of interest that you and Mrs. Phoebe Phan will end up paying. Provide an explanation, list your calculation process, and use a figure to illustrate the swap structure. Total = 50 marks
Note:
● To complete tasks 1-4, you are required to use/download relevant historical stock price data. For task 5 (Hedging using swaps), please use the information given only. No additional data is needed.
● Besides the working steps/summary of key results of your calculations to be discussed in the report, you also need to submit a separate Excel file to Canvas to show your detailed calculations.
● This instruction includes suggestions on items to include in the report; more information for parts you think are important may be included as you feel necessary, keeping in mind the word limit.
● The teaching team is not supposed to comment on your calculation workings or identify your calculation mistakes. The teaching team will provide guidance to make sure that you are on the right track. However, it is still your responsibility to investigate your work and identify the errors.
Submission
• All submissions must be made electronically on Canvas, accompanied by a cover sheet, through Canvas => Assignments => "Assessment 3: Individual Risk Management Report".
• The report should follow a structured format, starting with an executive summary and followed by sub-sections addressing all questions/tasks. 
Essential components of the report include page numbering, section numbering, main body, executive summary, reference list, introduction and conclusion.
• The report should be no longer than 2500 words (+/- 15%), excluding the executive summary, references and appendix. The student can have up to a 2-page appendix. Citations and references must be provided. The Excel file contains your workings to support the reported analysis.
• The submission must use 1 or 1.5 spacing and 12-point Times New Roman font.
• Students must ensure their reports are free from academic issues like copying, plagiarism, sharing work, collusion, and collaboration with other groups, maintaining a similarity rate below 30%. Academic misconduct can result in course failure, permanent academic records, and graduation delays due to the investigation time by the COBL Integrity office.
• Students are required to keep back-ups of all submitted work just in case any are lost.
Table 1 List of stocks
1. BRK.B Berkshire Hathaway Inc.
2. NVO Novo Nordisk A/S
3. JPM JPMorgan Chase & Co
4. V Visa Inc.
5. XOM Exxon Mobil Corporation
6. MA Mastercard Incorporated
7. PG The Procter & Gamble Company
8. JNJ Johnson & Johnson
9. HD The Home Depot, Inc.
10. TM Toyota Motor Corporation
11. BAC Bank of America Corporation
12. CRM Salesforce, Inc.
13. WFC Wells Fargo & Company
14. DIS The Walt Disney Company
15. MCD McDonald's Corporation
16. CSCO Cisco Systems, Inc.
17. GE GE Aerospace
18. BABA Alibaba Group Holding Limited
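For orientation only (the Week 11 template on Canvas is the authoritative method for this assessment), a one-day historical-simulation VaR boils down to a loss quantile of the historical return series; the function names and the exact quantile convention below are illustrative assumptions, and conventions differ slightly across textbooks:

```python
import math

def historical_var(returns, confidence=0.99):
    """One-day VaR as the loss at the `confidence` quantile of historical returns.

    `returns` are daily portfolio returns (e.g. -0.02 for a 2% loss);
    the result is a positive number representing the loss threshold.
    """
    losses = sorted(-r for r in returns)           # losses in ascending order
    k = math.ceil(confidence * len(losses)) - 1    # index of the quantile loss
    return losses[k]

def scale_var(one_day_var, horizon_days=10):
    """Square-root-of-time rule for scaling a one-day VaR to a longer horizon."""
    return one_day_var * math.sqrt(horizon_days)
```

Applied to a dollar portfolio, multiply the return-based VaR by the portfolio value; the 10-day figures asked for in parts (b) and (c) can either be scaled this way or computed directly from overlapping 10-day returns, depending on the approach your course material prescribes.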

$25.00 View

[SOLVED] ISE 580 Fall 2025 Homework 3 Web

ISE 580 Fall 2025 Homework #3 Show all your work step by step. Your grade depends on the clarity of your solutions and the accuracy of your answers.
1. [60 points] Hungry's Fine Fast Foods is interested in looking at their staffing for the lunch rush, running from 10 am to 2 pm (4 hours). People arrive as walk-ins, by car, or on a (roughly) scheduled bus, as follows:
• Walk-ins—one at a time; interarrivals are exponential with mean 3 minutes; the first walk-in occurs EXPO(3) minutes past 10 am.
• By car—with one, two, three, or four people to a car with respective probabilities 0.2, 0.3, 0.3, and 0.2 (DISC(0.2, 1, 0.5, 2, 0.8, 3, 1, 4)); interarrivals distributed as exponential with mean 5 minutes; the first car arrives EXPO(5) minutes past 10 am.
• A single bus arrives every day sometime between 11 am and 1 pm (arrival time distributed uniformly over this period). The number of people on the bus varies from day to day, but it appears to follow a Poisson distribution with a mean of 30 people.
Once people arrive, either alone or in a group from any source, they operate independently regardless of their source. The first stop is with one of the servers at the order/payment counter, where ordering takes TRIA(1, 2, 4) minutes and payment then takes TRIA(1, 2, 3) minutes; these two operations are sequential, first order-taking then payment, by the same server for a given customer. The next stop is to pick up the food ordered, which takes an amount of time distributed uniformly between 30 seconds and 2 minutes. Then each customer goes to the dining room, which has 30 seats (people are willing to sit anywhere, not necessarily with their group), and partakes of the sublime victuals, taking an enjoyable TRIA(11, 20, 31) minutes. After that, the customer walks fulfilled to the door and leaves. Queueing at each of the three "service" stations (order/pay, pickup food, and dining room) is allowed, with FIFO discipline. 
There is a travel time of EXPO(30) seconds from each station to all but the exit door—entry to order/pay, order/pay to pickup food, and pickup food to dining. After eating, people move more slowly, so the travel time from the dining room to the exit is EXPO(1) minute. The servers at both order/pay and pickup food have a single break that they "share" on a rotating basis. More specifically, at 10:50, 11:50, 12:50, and 1:50, one server from each station goes on a 10-minute break; if the person due to go on break at a station is busy at break time, he or she finishes serving the customer but still has to be back at the top of the hour (so the break could be a little shorter than 10 minutes). You can use the images below for the current staff schedule. Order and Payment Servers Schedule Pickup Food Servers Schedule Staffing is the main issue facing Hungry's. Currently, there are six servers at the order/pay station and two at the pickup food station throughout the 4-hour period. Since they know that the bus arrives sometime during the middle 2 hours, they're considering a variable staffing plan that, for the first and last hour, would have three at order/pay and one at pickup food, and for the middle 2 hours would have nine at order/pay and three at pickup food (note that the total number of person-hours on the payroll is the same, 32, under either the current staffing plan or the alternative plan, so the payroll cost is the same). What's your advice? (You can use the images below for this staff schedule.) Order and Payment Servers Schedule Pickup Food Servers Schedule In terms of output, observe the average and maximum length of each queue, the average and maximum time in each queue, and the total number of customers completely served and out the door. Make plots of the queues to get into order/pay, pickup food, and the dining room.
2. 
[40 points] Patients arrive at a 24-hour, 7-days-a-week outpatient clinic with interarrival times distributed as exponential with mean 5.95 (all times are in minutes); the first patient arrives at time 0. The clinic has five different stations (like nodes in a network), where patients might be directed for service before leaving. All patients first sign in with a single receptionist; sign-in times have a triangular distribution with parameters 1, 4, and 8. From there, they might go to the nurses' station (probability 0.9888) or to one of three exam rooms (probability 0.0112). The table below gives the five stations, service times at those stations, and transition probabilities out of each station into the next station for a patient (including out of the sign-in station, just described as an example). All patients eventually go through the checkout station and go home. Note that it is possible that, after a visit to an exam room, a patient is directed to an exam room again (but may have to queue for it). After a patient checks in but is queueing for either the nurses' station or an exam room, regard that patient as being in the waiting room (and those patients leaving an exam room but again directed to an exam room are also regarded as being in the waiting room). There are three identical exam rooms, but only one resource unit at each of the other four stations. Queues for each station are first-in, first-out, and we assume that movement time between stations is negligible. Run a simulation of 30 round-the-clock 24-hour days and observe the average total time in system of patients, the average number of patients present in the clinic, as well as the throughput of the clinic (number of patients who leave the clinic and go home over the 30 days); also make a plot of the number of patients present in the clinic. If you could afford to add resources, where is the need most pressing?
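Whatever simulation tool is used, the arrival processes in Problem 1 can be sampled directly. A pure-Python sketch is below; the function names are my own, and the Poisson draw uses Knuth's multiplication method since the standard library has no Poisson sampler:

```python
import math
import random

def expo(mean):
    """EXPO(mean): exponential interarrival time with the given mean."""
    return random.expovariate(1.0 / mean)

def car_group_size():
    """DISC(0.2, 1, 0.5, 2, 0.8, 3, 1, 4): 1-4 people per car."""
    return random.choices([1, 2, 3, 4], weights=[0.2, 0.3, 0.3, 0.2])[0]

def poisson(lam):
    """Poisson sample via Knuth's method; fine for lam = 30."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def bus_arrival_minutes():
    """Bus arrival uniform between 11 am and 1 pm, in minutes past 10 am."""
    return random.uniform(60, 180)
```

Walk-in arrival times are then cumulative sums of `expo(3)` draws starting at `expo(3)` minutes past 10 am, and similarly `expo(5)` for cars, with `car_group_size()` people per car and `poisson(30)` people on the single bus.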

$25.00 View

[SOLVED] COMP9517 Computer Vision 2025 T3 Lab 2

COMP9517: Computer Vision 2025 T3 Lab 2 Specification Maximum Marks Achievable: 2.5 This lab is worth 2.5% of the total course marks. The lab files should be submitted online. Instructions for submission will be posted closer to the deadline. Deadline for submission is Week 4, Friday 10 October 2025, 18:00:00 AET. Objective: This lab revisits important concepts covered in the Week 3 lectures and aims to make you familiar with implementing specific algorithms. Software: You are required to use OpenCV 3+ with Python 3+ and submit your code as a Jupyter notebook (see coding and submission requirements below). In the tutor consultation session this week, you can ask any questions you may have about this lab. Materials: The image pair to be used in this lab is available via WebCMS3 and you are also asked to capture one pair of pictures yourself. For the latter, use your smartphone or digital camera to take the pictures required for the tasks below. Submission: All code and requested results are assessable after the lab. Submit your source code as a Jupyter notebook (.ipynb file) that includes all output and answers to all questions (see coding requirements at the end of this document) by the above deadline. The submission link will appear on WebCMS3 in due time. Preparation: For the pictures to be taken by yourself, choose any scene with clear structures (e.g. buildings) on campus or somewhere in your neighborhood and take two pictures of it. The two pictures should have some amount of overlap but neither of them should capture the whole scene. Below is an example pair of pictures (not to be used in your own results). To save disk space and computation time, you may downscale your pictures (e.g. to 1,000 pixels wide) before carrying out the tasks below. Complete Tasks 1–3 using the image pair provided on WebCMS3 for this lab. Then repeat Task 3 for your own picture pair to see if your code generalizes easily to new images. 
Example Picture 1 Example Picture 2 Task 1 (0.5 mark) Compute the SIFT features of the two pictures. a) Extract the SIFT features with default parameters and show the keypoints on the pictures. Hint: Use existing library functions for this (see suggestions at the end). b) To achieve better visualization of the keypoints, reduce their number to include only the ~20 most prominent ones. Hint: Vary the parameter contrastThreshold or nfeatures. Show the results obtained in a) and b) in your Jupyter notebook (like the examples below) and include a brief description of the approach you used for b). Task 2 (1 mark) Recompute the SIFT features for the following processed versions of the two pictures: a) Scaled with a factor of 120 percent. b) Rotated clockwise by 60 degrees. c) Contaminated with salt and pepper noise. Hint: The scikit-image library has a utility function to add random noise of various types to images. For each of these three versions of the pictures, extract the SIFT features and show the keypoints on the processed pictures using the same parameter settings as for Task 1 (for the reduced number of keypoints). Inspect the keypoints visually: Are the keypoints of the processed pictures roughly the same as those of the originals? What does this say about the robustness of SIFT in each case? To which of the three types of processing is SIFT most robust? Show the results obtained for each of a), b), and c) in your Jupyter notebook and include your answers to the questions stated above. Task 3 (1 mark) Match and stitch the two pictures to create a single composite picture. a) Find the keypoint correspondences between the pictures and draw them. Hints: First, use OpenCV's brute-force descriptor matcher (BFMatcher) to find matching keypoints. Then, use its kNN-based matching method (knnMatch) to extract the k nearest neighbours for each query keypoint. 
Use your own criteria based on the keypoint distances to select the best keypoint correspondences between the two pictures. b)   Use the RANSAC algorithm to robustly estimate the mapping of one of the two pictures to the other based on the selected best keypoint correspondences and then apply the mapping and show the final stitched picture. Hints: There are existing OpenCV functions to find the mapping (findHomography) between sets of points using various methods, as well as functions to apply this mapping to sets of points (perspectiveTransform) and to warp pictures accordingly (warpPerspective). You may need to crop the result to get a nicely stitched picture. The red line shown in the example below indicates the stitching boundary, but it is not necessary to draw the boundary in your result. Coding Requirements A general goal in computer vision is to develop methods that work for a wide range of images. So here, too, the goal is to write code that works for both image pairs (the one we provided and your own), ideally using the same parameter values, not requiring careful manual tuning of the parameters for each image pair separately to get good results. If your code does require some tuning, include clear comments explaining this in your code. Make sure that in your Jupyter notebook, the input pictures are readable from the location specified as an argument, and all outputs and other requested results are displayed in the notebook environment. All cells in your notebook should have been executed so that the tutor/marker does not have to execute the notebook again to see the results. Coding Suggestions Check the OpenCV documentation for various built-in functions to find SIFT features, draw keypoints, and match keypoints in images, as well as apply RANSAC to estimate a mapping function. You should understand how the algorithms work, what parameters you can set in these built-in functions, and how these parameters affect the output. 
For your reference, below are links to relevant OpenCV functions.
2D Features Framework: https://docs.opencv.org/4.6.0/da/d9b/group__features2d.html
Drawing Functions of Keypoints and Matches: https://docs.opencv.org/4.6.0/d4/d5d/group__features2d__draw.html
Descriptor Matchers: https://docs.opencv.org/4.6.0/d8/d9b/group__features2d__match.html
OpenCV SIFT Class Reference: https://docs.opencv.org/4.6.0/d7/d60/classcv_1_1SIFT.html
See the following page to understand image features and various feature detectors: https://docs.opencv.org/4.6.0/db/d27/tutorial_py_table_of_contents_feature2d.html
Also, see the following example of computing SIFT features and showing the keypoints: https://docs.opencv.org/4.6.0/da/df5/tutorial_py_sift_intro.html
And finally, see this page for an example of feature matching: https://docs.opencv.org/4.6.0/dc/dc3/tutorial_py_matcher.html
Reference: D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, November 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94
Copyright: UNSW CSE COMP9517 Team. Reproducing, publishing, posting, distributing, or translating this lab assignment is an infringement of copyright and will be referred to UNSW Student Conduct and Integrity for action. Released: 29 September 2025
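A minimal sketch of the Task 3 matching-and-stitching pipeline is below. The 0.75 ratio threshold, the helper names, and the output canvas size are my own illustrative choices, not lab requirements; the OpenCV calls are the ones the lab links to. The cv2 import sits inside `stitch` so the ratio-test helper stays usable on its own:

```python
import numpy as np

def lowe_ratio_filter(distance_pairs, ratio=0.75):
    """Keep indices of query keypoints whose best match is clearly better
    than the second best (Lowe's ratio test on k=2 nearest-neighbour distances)."""
    return [i for i, (best, second) in enumerate(distance_pairs)
            if best < ratio * second]

def stitch(img1, img2):
    import cv2  # imported here so lowe_ratio_filter works without OpenCV installed
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)          # brute-force, k=2
    keep = lowe_ratio_filter([(m.distance, n.distance) for m, n in pairs])
    good = [pairs[i][0] for i in keep]
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)       # RANSAC rejects outliers
    # Warp img2 into img1's frame on a generous canvas, then paste img1 over it;
    # crop the result as the lab suggests for a clean composite.
    h, w = img1.shape[:2]
    out = cv2.warpPerspective(img2, H, (w * 2, h))
    out[0:h, 0:w] = img1
    return out
```

For Task 3a you would additionally draw the kept correspondences with `cv2.drawMatches` before estimating the homography.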

$25.00 View

[SOLVED] CMPM 169 - Creative Coding Fall 2025

CMPM 169 - Creative Coding Tu|Th 9:50AM - 11:25AM; Phys Sciences 114 Fall 2025, 12412 COURSE INFORMATION Surveys seminal and contemporary artworks and interactive installations that utilize and critically analyze new media, new technologies, and new algorithms. Students are introduced to creative coding practices and encouraged to emulate existing digital arts techniques and to develop their own computational arts projects. PREREQUISITES/COREQUISITES CMPM 35 or CMPM 120 or CMPM 163 or by permission of the instructor. Enrollment is restricted to juniors, seniors, and graduate students. REQUIRED MATERIALS, TEXTBOOKS AND TECHNOLOGY A laptop able to run Unity3D (https://docs.unity3d.com/Manual/system-requirements.html) and Touchdesigner (https://derivative.ca/UserGuide/System_Requirements) is highly recommended. If you do not have access to a device you can use to work on the materials in class, you will have permission to work from a lab or from your home on Lab days. ASSIGNMENTS There will be 7 short assignments and one final project for this class. For the 7 short assignments, the lowest grade will be dropped. You are encouraged to do all seven, as the grading for some of these will be difficult. FINAL PROJECT The final project will be worth 40% of your final grade, split into 4 sections: Ideation, Pitch, Report, and Showcase.
· Ideation - Split across two submissions, you will come up with a list of ideas for a final project in the first weeks, then do a deeper dive on a subset of those ideas in later weeks.
· Pitch - From your ideation stage, pick an idea to complete. You'll create a short description of your project, a rough plan for executing it, and what resources you will need to complete it.
· Report - A mid-project check-in to see your progress and a report on what still needs to be done and what roadblocks you have.
· Showcase - A final showcase of your completed work, done in the finals week of class.

$25.00 View

[SOLVED] ENG 107 Fall 2025 -MLA Workshop

ENG 107: Fall 2025 - MLA Workshop There are three parts to the MLA activity. We will do them one by one. They will be posted separately. READ THE LESSON PLAN FIRST! A. MLA A Creation DOC - Due Sunday 21 September 2025 before 11:59 PM. B. MLA B Questions - TBA C. MLA C - Works Cited Page - TBA PART A: DUE: Sunday 21 September before 11:59 PM 1. Creating the MLA formatted paper. Let's look at the Purdue OWL in the link below: This will let you see the guide we will be using - you do not have to learn it - it is only a resource and support to help you write your academic papers using MLA format. This guide can also help you write in other styles such as APA and Chicago style. We only use MLA in my course. I hope some of these links help you - did you find better ones? https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_general_format.html Let's look at how to create your own MLA formatted paper! https://www.youtube.com/watch?v=6-McoBrArjM Your Turn! 1. Open a BLANK WORD DOC. Do not use Google Docs, Pages, or a PDF. Then create your own MLA formatted page for this assignment by watching the video. Need Help? I will be holding LIVE MLA workshops on Zoom from 9 PM - 11 PM AZ time on Wednesday and Thursday, 9/17 and 9/18. https://asu.zoom.us/j/2525179319 If you need help and cannot make it to the scheduled office hours and workshops, email me and let's make an online appointment! Some corrections to the video if you do not have the latest version of MS Word: a. You do not have to check the margins. If you are using MS Word, you will automatically have one-inch margins all the way around. b. In the Header, I do not want you to use our Section Number for my class. c. Some of you may have to do the following to get to your pages: Click on the word Insert from the Home page/button on the top menu of the MS Word program to create your pages. 
Next, find the box that says Page Number, click on the arrow down (menu), click on Top of the Page, and then choose Plain Number 3. The rest of the instructions in the video are OK to use! d. In the body paragraphs, he is using non-words and just hitting keys on the keyboard after the first sentence of each paragraph, simply to show you what the body paragraphs will look like in MLA format and how to create them! You can use your own "dkgealikgraeitgae" if you like as well. Just make sure you have the same number of make-believe sentences to understand paragraph construction in MLA format, OK? Please email me if you are confused! I am here to help! e. The title you will use on the formatted paper is the same as the one used in the video: How to Format a Paper in MLA Style. The rest of the instructions in the video are OK to use! 2. SAVE YOUR Word Doc file as Your Last Name - MLA Creation. 3. The title you will use on the formatted paper is the same as the one used in the video. 4. Post your MLA paper to the Canvas Assignment Area found in Module Two called (click on the link in Canvas to get to it): MLA Creation, Due Sunday 21 September before 11:59 PM. WEEK SIX - MLA B - To be posted later this weekend. Reading to be done for Assignments #5 and #6 - simply read the links. You will need to use the Purdue OWL to help you get the correct answers for both the B and C assignments. The menu in the Purdue OWL for everything you need is on the left-hand side.

$25.00 View