Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] Project 4 Advanced Options Pricing and Risk Management

Project 4: Advanced Options Pricing and Risk Management

Learning Objectives:
· Apply the Monte Carlo method and Binomial Lattice models to price call and put options.
· Compare and assess the accuracy of different pricing models relative to real market data.
· Understand the assumptions and limitations of volatility models and their impact on options pricing.
· Develop and analyze Delta Neutral Portfolios to minimize market risk using basic option strategies.

Motivation: Options pricing and risk management are crucial skills in financial markets. This project combines theoretical models with practical application to simulate real-world pricing scenarios for options. By implementing various pricing models, including the Monte Carlo and Binomial Lattice methods, and constructing risk-neutral portfolios, students will learn how to manage risk effectively and understand the factors that influence option prices. This hands-on approach reinforces key concepts in financial modeling and helps students become more proficient at interpreting market data.

Part 1: Pricing Options Using Simulations (250 points)

In this section, you will create different pricing models for a non-dividend-paying stock and explore the use of Monte Carlo simulations and Binomial Lattices.

1. Monte Carlo Simulations:
· Explanation: The Monte Carlo method simulates possible future price paths of the underlying asset by generating random price movements based on its volatility and time to maturity, then averages the outcomes to determine the option price.
· Tasks:
o Run 100 simulations to price a Call option.
o Run 100 simulations to price a Put option.
· Goal: Understand how random price movements affect the value of an option, and observe how averaging across many simulations improves the estimate of the option price.

2. Binomial Lattice Models:
· Explanation: The Binomial Lattice model divides the option's time to expiration into several periods, where the price of the underlying asset moves up or down at each step. By working backward through the lattice, we can compute the option's value at each node and ultimately arrive at its current price.
· Tasks:
o Build a 25-period lattice for a Call option.
o Build a 25-period lattice for a Put option.
· Goal: Learn how the lattice model approximates option prices by simulating price changes in discrete steps over time.

3. Compare Model Prices with Market Prices:
· Use Drexel University's Black-Scholes-Merton calculator to determine the theoretical price of the Call and Put options: https://www.math.drexel.edu/~pg/fin/VanillaCalculator.html
· Compare your results from both the Monte Carlo and Binomial Lattice models to the Black-Scholes price and the actual market prices (from Yahoo Finance).
· Calculate the error between the simulated prices and the market prices using the absolute value of the differences.
Explanation: This step will help you assess the accuracy of each model. The Black-Scholes-Merton model provides a widely accepted theoretical price, while market prices reflect real-world conditions. By comparing these with your simulated results, you can evaluate the strengths and limitations of each model.

4. Extended Monte Carlo Simulations:
· Run additional simulations for Call options using:
o 250 simulations
o 500 simulations
· Explanation: Increasing the number of simulations improves the accuracy of the Monte Carlo model. This task will help you observe how the number of simulations affects the stability and precision of the option price estimates.
Goal: Compare the results of different simulation runs to see how increasing the number of simulations impacts pricing accuracy.
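The two pricing approaches in Part 1 can be sketched in a few lines of Python. This is a minimal illustration rather than the project's required implementation; the contract parameters (spot 100, strike 100, 5% rate, 20% volatility, one year to maturity) are made up for the example.

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_sims, seed=0):
    """Monte Carlo: average the discounted payoffs over simulated terminal prices."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_sims

def crr_call_price(s0, k, r, sigma, t, n_steps):
    """Cox-Ross-Rubinstein lattice: backward induction from terminal payoffs."""
    dt = t / n_steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity; node j has had j up-moves
    values = [max(s0 * u**j * d**(n_steps - j) - k, 0.0) for j in range(n_steps + 1)]
    for _ in range(n_steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

if __name__ == "__main__":
    # Hypothetical contract: S0=100, K=100, r=5%, sigma=20%, T=1 year
    print(mc_call_price(100, 100, 0.05, 0.20, 1.0, 10_000))
    print(crr_call_price(100, 100, 0.05, 0.20, 1.0, 25))  # close to the Black-Scholes value of about 10.45
```

Rerunning the Monte Carlo estimator with 100, 250, and 500 paths shows the behavior task 4 asks about: the estimate wanders around the lattice and Black-Scholes values and settles down as the simulation count grows.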
Part 2: Delta Neutral Portfolio Construction (250 points)

In this section, you will construct Delta Neutral Portfolios using different options strategies.

1. Delta Neutral Portfolio:
· Explanation: A Delta Neutral Portfolio balances the quantities of options and the underlying asset so that the portfolio's value is insensitive to small changes in the price of the underlying asset. Delta measures how much an option's price is expected to move per $1 change in the price of the underlying asset. By creating a portfolio in which the Deltas of the long and short positions offset each other, you can reduce risk.
· Tasks:
o Construct a Delta Neutral Portfolio for:
§ Long Call
§ Short Call
§ Long Put
§ Short Put
· Assumptions:
o Gamma = 0 (there is no curvature, or non-linearity, in how Delta changes with price).
o All other Greeks (e.g., Theta, Vega) remain constant.
Goal: Learn how to hedge against price changes by neutralizing the Delta of your positions, an essential risk management technique.

Part 3: Readme and Reflections (250 points)

This section focuses on reflecting on your models and assumptions. You will answer questions that encourage critical thinking about the accuracy of the models, the assumptions about volatility, and the Greeks' influence on option prices.

1. Volatility Assumptions:
· Explanation: Volatility refers to the degree of variation in the price of the underlying asset. In this project, you are making assumptions about future volatility based on historical data or the implied volatility from the options market.
· Question: What assumptions are you making about volatility in your models? Are they realistic?

2. Improving Binomial Lattice Accuracy:
· Explanation: A Binomial Lattice with more time periods provides a more refined approximation of the underlying asset's possible future prices.
· Question: How can you adjust the number of periods in the Binomial Lattice to increase accuracy?

3. Improving Monte Carlo Accuracy:
· Explanation: More simulations in the Monte Carlo model lead to a more accurate estimate of the option price by reducing the error due to randomness.
· Question: How does increasing the number of simulations in the Monte Carlo method improve accuracy?

4. Impact of Increasing Simulations:
· Question: What would happen to call prices if you increased the number of simulations from 100 to 10,000, 100,000, or even 10 million? Would you expect the call prices to stabilize or fluctuate more?

5. Model Accuracy Comparison:
· Question: Compare the accuracy of the Binomial Lattice, Monte Carlo, and Black-Scholes-Merton models. Which model provided the most accurate results relative to the actual market price, and why?

6. Factors Affecting Call Prices:
· Explanation: The Greeks help you understand how different factors impact the price of a Call option.
· Question: How do the following factors correlate with the price of a Call option? (Hint: refer to the Greeks for your explanations)
o Stock Price
o Interest Rates
o Time to Maturity
o Volatility

7. Factors Affecting Put Prices:
· Explanation: The same Greeks also impact Put options, but in different ways compared to Calls.
· Question: How do the following factors correlate with the price of a Put option? (Hint: refer to the Greeks for your explanations)
o Stock Price
o Interest Rates
o Time to Maturity
o Volatility

8. Straddle Strategy and Volatility:
· Explanation: A long straddle involves buying both a Call and a Put option with the same strike price and expiration date. This strategy benefits from large price movements in either direction and is therefore sensitive to volatility.
· Question: Reflect on your work with the straddle strategy from pset 3. Why does volatility have a positive correlation with the price of options, particularly in straddle strategies? Comment on how the straddle's legs (Call and Put) are impacted by volatility.
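The hedging idea in Part 2 can be sketched as follows: with Gamma assumed zero, a position of n options, each with Delta Δ, is neutralized by holding −nΔ shares of the underlying. The sketch below uses the Black-Scholes Delta on a hypothetical contract for illustration; it is not the project's required solution.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_delta(s0, k, r, sigma, t, kind="call"):
    """Black-Scholes Delta of a European option on a non-dividend-paying stock."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    return norm_cdf(d1) if kind == "call" else norm_cdf(d1) - 1.0

def hedge_shares(position, delta):
    """Shares of the underlying needed to make the option position Delta neutral."""
    return -position * delta

if __name__ == "__main__":
    # Hypothetical contract: S0=100, K=100, r=5%, sigma=20%, T=1 year
    d_call = bs_delta(100, 100, 0.05, 0.20, 1.0, "call")
    d_put = bs_delta(100, 100, 0.05, 0.20, 1.0, "put")
    print(hedge_shares(+100, d_call))  # long 100 calls -> short roughly 64 shares
    print(hedge_shares(-100, d_put))   # short 100 puts -> short roughly 36 shares
```

Under the project's Gamma = 0 assumption this hedge stays neutral; in practice Delta drifts as the stock moves and the hedge must be rebalanced.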

$25.00

[SOLVED] Practice Exam 2 Math 354 Fall 2024

Practice Exam 2, Math 354 Sections 01-02, Fall 2024

1: (40 pts.) Consider the following linear programming problem: maximize z = 3x1 + 2x2 − x3 + 3x4, subject to the constraints (not reproduced in this listing). The final tableau to this linear programming problem is given (not reproduced in this listing).
(a) (4 pts.) What basic feasible solution does the tableau above represent?
(b) (10 pts.) What is the dual problem to the linear programming problem above, and what are its optimal solution and optimal value?
(c) (6 pts.) Use the complementary slackness theorem to find the values of the slack variables of the dual problem at the optimal solution.
(d) (20 pts.) What is the optimal solution to this programming problem if we add the constraint that x1, x2, x3, x4 are integers?

2: (30 pts.) Consider the following linear programming problem: maximize z = 3x1 + x2 + 2x3, subject to the constraints (not reproduced in this listing). The final tableau to this linear programming problem is given (not reproduced in this listing).
(a) (5 pts.) Suppose the objective function is replaced with z = 3x1 + c2′x2 + 2x3. Find the range of ∆c2 = c2′ − c2 such that the solution corresponding to the final tableau is still optimal.
(b) (10 pts.) Suppose the objective function is replaced with z = c1′x1 + x2 + 2x3. Find the range of ∆c1 = c1′ − c1 such that the solution corresponding to the final tableau is still optimal.
(c) (15 pts.) Suppose the first constraint above is replaced with x1 − 3x2 + 3x3 ≤ 10. What is the new optimal solution?

3: (30 pts.) Consider the following transportation problem (cost table not reproduced in this listing).
(a) (6 pts.) Use the minimal cost method to construct an initial basic feasible solution.
(b) (6 pts.) Use Vogel's method to construct an initial basic feasible solution.
(c) (18 pts.) Starting with the basic feasible solution in part (a), find the optimal solution and the minimal cost.
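The minimal cost method in problem 3(a) can be sketched as a greedy procedure: repeatedly allocate as much as possible to the cheapest remaining cell until all supply and demand are exhausted. The cost matrix, supplies, and demands below are invented for illustration; the exam's actual table is not reproduced in this listing.

```python
def min_cost_initial_bfs(costs, supply, demand):
    """Minimal cost method: greedy initial basic feasible solution
    for a balanced transportation problem."""
    supply = supply[:]          # remaining supply per source
    demand = demand[:]          # remaining demand per destination
    alloc = [[0] * len(demand) for _ in supply]
    # visit cells from cheapest to most expensive
    cells = sorted((costs[i][j], i, j)
                   for i in range(len(supply))
                   for j in range(len(demand)))
    for _, i, j in cells:
        qty = min(supply[i], demand[j])
        if qty > 0:
            alloc[i][j] = qty
            supply[i] -= qty
            demand[j] -= qty
    return alloc

if __name__ == "__main__":
    costs = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]   # made-up unit costs
    supply = [76, 82, 77]
    demand = [72, 102, 61]
    plan = min_cost_initial_bfs(costs, supply, demand)
    total = sum(costs[i][j] * plan[i][j] for i in range(3) for j in range(3))
    print(plan, total)
```

This only produces a starting point; part (c)'s optimality step (e.g., the stepping-stone or MODI method) would then improve it toward the minimal cost.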

$25.00

[SOLVED] Database Development and Design DTS207TC Assessment 002 Individual Coursework

Database Development and Design (DTS207TC)
Assessment 002: Individual Coursework
Due: Dec 24th, 2024 @ 23:59
Weight: 40%
Maximum Marks: 100

Overview & Outcomes
This coursework will be assessed against the following learning outcomes:
C. Illustrate the issues related to Web technologies and DBMS, and XML as a semi-structured data representation formalism.
D. Identify the principles underlying object-relational models.

Submission
You must submit the following files to LMO:
1) A report named Your_Student_ID.pdf.
2) A directory containing all your source code, named Your_Student_ID_code.
NOTE: The report shall be in A4 size, size 11 font, and shall not exceed 8 pages in length. You may include only key code snippets in the report; the complete source code can be placed in the attachment.

Assessment Tasks
We have some stock-related datasets in XML format (attached) and would like to put them on a website for users to query.
1) Browse through the XML files in the attachment, and define a DTD and an XML Schema for them. Use both definitions to validate the XML files and manually fix any potential errors. Extract the file headers from the XML Schema and convert the XML to CSV. Open the generated CSV with any editor and take a screenshot. (20 Marks)
2) Use flask_sqlalchemy in Flask to build an ORM for the CSV from task 1), and import the data into PostgreSQL. Manually draw an Entity-Relationship diagram for the three tables, take a photo, and include it in the report. (20 Marks)
3) Use Flask to implement the required web page as shown in the diagram, which includes a table with the necessary fields. To differentiate yourself, you can set the form style to your preference and take a screenshot. (20 Marks)
4) Based on task 3), add filtering functionality for stock name, start time, and end time, implementing a page as shown below. Note that one or more of these filter conditions can be empty, meaning no filtering on that condition. To differentiate yourself, you can set the form style to your preference and take a screenshot. (20 Marks)
5) Use the provided testing program to perform a performance test on task 4). The program uses a POST request to query with all conditions set to empty, which should return the full result set. As long as the returned content is correct, you can optimize performance in any way. Take a screenshot of the test results. Ideal performance should be no higher than 0.2 seconds per query. (20 Marks)

NOTE:
a. Provide a brief introduction to the program logic in your own words; including code snippets is encouraged, but please do not paste the entire program into the report without explanation.
b. For your full academic development, the use of generative AI to gain inspiration is allowed for this assignment; however, out of mutual respect, please do not paste its output directly into your assignment and submit it.
c. To prove that you have indeed completed this assignment and did not rely solely on generative AI, please provide screenshots of the running results for each task.

Marking Criteria
The tasks in this assessment fall into three categories: Charts Presentation & Analysis; Essay; Programs.

Design
· Exemplary (100): Provides a detailed, accurate description of the methods, and a comprehensive comparison between the methods, including pros and cons and performance analysis.
· Good (75): The analysis demonstrates that the student's understanding of the various methods is correct and that they can solve problems independently, although there are certain flaws or the work is incomplete.
· Satisfactory (50): Provides an adequate description of the methods; a comparison is provided with some level of detail, however with some obvious mistakes.
· Limited (25): There are obvious deviations in the understanding of the main methods, and the work fails to reflect the ability to independently design algorithms; the description of the problem is vague, or the reasoning is incomplete.
· Very Limited (0): Limited or no description of the methods; limited comparison provided.

Programs
· Exemplary (100): Correctly implemented code that produces correct output; excellent coding quality that follows best practices.
· Good (75): The program runs correctly and gives the expected results; however, special cases are not fully considered, or the program performs redundant calculations.
· Satisfactory (50): The program basically works correctly for major functionality, however with some conceptual problems.
· Limited (25): The program implements some minor functionality, or incorrectly implements major functionality; there is a certain degree of misunderstanding of the requirements.
· Very Limited (0): The program works incorrectly with limited attempt, or is irrelevant to the task.

Charts Presentation & Analysis
· Exemplary (100): Excellent quality of report with clear structure, clear logic, concise writing, and pleasing visual aids.
· Good (75): Most of the results in the charts are correct, but the overview and analysis are somewhat sloppy or wordy.
· Satisfactory (50): Moderate quality of report with a basic structure; the writing and visual aids can be improved.
· Limited (25): Only some of the results in the charts are correct, or some of them are not filled in; the analysis of the results is obviously biased.
· Very Limited (0): Limited or no attempt at a report.

The mark allocations for the above tasks are:

Task  Design  Programs  Charts Presentation & Analysis
1     0       17        3
2     15      5         0
3     10      5         5
4     0       15        5
5     10      8         2
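Task 1's XML-to-CSV conversion can be sketched with Python's standard library alone. The element names below (a `records` root holding `record` elements with `symbol`, `date`, and `close` children) are invented for illustration; the actual field names come from the attached XML Schema.

```python
import csv
import io
import xml.etree.ElementTree as ET

SAMPLE_XML = """<records>
  <record><symbol>AAA</symbol><date>2024-12-02</date><close>10.5</close></record>
  <record><symbol>BBB</symbol><date>2024-12-02</date><close>20.1</close></record>
</records>"""

def xml_to_csv(xml_text):
    """Flatten each child record into one CSV row; headers come from the tag names."""
    root = ET.fromstring(xml_text)
    rows = [{child.tag: child.text for child in rec} for rec in root]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

if __name__ == "__main__":
    print(xml_to_csv(SAMPLE_XML))
```

In the coursework you would write the result to a file instead of a string, after validating the XML against your DTD and Schema.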

$25.00

[SOLVED] COMP3134 Business Intelligence and Customer Relationship Management 2024-2025 Semester One M

COMP3134 Business Intelligence and Customer Relationship Management
2024-2025 Semester One Group Project (25%)
Deadline: week 13 (1 Dec Sunday, 23:59)

You need to submit a report (with a maximum of 25 A4 pages), a set of PPT slides, and a presentation in a recorded video of around 30 minutes. All group members must take part in the presentation.

Assume that you are working in a marketing consulting company, and your team consists of CRM experts and BI professionals. Your team has been assigned to work with three organizations to solve their business problems, as follows:

Organization A (Understanding Customer Relationship through Market Research)
This organization would like to invite your team to help it understand its current relationship with customers. In particular, the organization wants to collect the following information through market research on about 200 of its existing customers, and from any other available information:
· Customer experience and satisfaction with the organization's customization and personalization services.
· The driving forces and factors for the organization in building and nourishing brand loyalty.
· Channel effectiveness in building relationships with customers.
You may select a product or service that your project group is interested in and carry out market research to answer the above three questions. The following research methods can be used, and you are free to select or integrate them in your research design:
1. Focus group discussion.
2. Interviews.
3. Emails, telephones, and/or observations.
4. Questionnaires.
5. Review of other reported sources.

Organization B (Predictive Analytics)
This organization is planning a Xmas sales promotion campaign for its products. However, the organization does not know its customers' past purchasing behavior and is not able to decide how such a Xmas sales promotion should be designed and launched. Being a group of CRM and BI experts, you will use predictive analytics to analyze its customer purchasing data and suggest a strategy to select the product mix for the coming Xmas promotion campaign. The following results are particularly useful for such planning:
· Classification (discover possible causal relationships among data).
· Clustering (discover similarities and differences among clusters).
· Associations (discover interesting relationships among data).
The company's data set is given to you; the file name is Super Market Analysis. You are free to use any software, or publicly available methods and packages, to help with your predictive analytics.

Organization C (Prescriptive Analytics)
Organization C is a public health service provider, and the organization has a problem scheduling resources to serve the public. In the past, many patients have cancelled their appointments with the doctors, so the organization's resources are not used effectively. The organization is considering changing the one-patient-to-one-doctor appointment practice to a many-patients-to-many-doctors appointment practice, i.e., more patients will be scheduled in the same time slot, so that if some of them are absent, there are still enough patients for the doctors. Being a group of CRM and BI experts, you decide to use prescriptive analytics to analyze the organization's past patient no-show records and suggest a strategy to carry out the many-patients-to-many-doctors appointment practice. The following results are particularly useful for such planning:
· Statistical analysis of the characteristics of show and no-show cases.
· What-if analysis.
The company's data set is given to you; the file name is healthcare_noshows_appointments. You are free to use any software, or publicly available methods and packages, to help with your prescriptive analytics.
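Organization C's what-if analysis can be sketched as a small simulation: given an estimated no-show probability, how many patients should be booked per slot so that, most of the time, at least as many show up as there are doctors? The no-show rate and slot sizes below are invented for illustration, not taken from the provided data set.

```python
import random

def p_enough_shows(booked, doctors, p_noshow, trials=20_000, seed=42):
    """Estimate P(at least `doctors` patients show up) when `booked` patients
    are scheduled in a slot and each no-shows independently with p_noshow."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        shows = sum(rng.random() >= p_noshow for _ in range(booked))
        if shows >= doctors:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # What-if: 3 doctors per slot, 20% no-show rate (hypothetical)
    for booked in range(3, 7):
        print(booked, round(p_enough_shows(booked, 3, 0.20), 3))
```

In the actual project the no-show probability would be estimated from the characteristics of show/no-show cases in the data, and the simulation could also count the opposite risk: too many patients showing up for the available doctors.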

$25.00

[SOLVED] ECE 101A Engineering Electromagnetics Homework 6

ECE 101A Engineering Electromagnetics Homework #6
Due: Wednesday, November 20th, 23:59
Please submit your homework to Gradescope.

Problem 1: Infinite sheet current
(a) Find the B-field and H-field created by an infinite, uniform sheet current density that flows on the x-y plane along the x-direction (i.e., a surface current; the expression is not reproduced in this listing, with J0 a constant with units A/m). How does the field depend on the distance from the surface?
(b) Repeat part (a) for an infinite slab of current density extending in the x-y plane, whose density is given in the original handout (J1 is a constant bulk current density with units A/m²). Make sure you give the field for all values of z, and don't forget the direction.
(c) Now the current density in part (b) is changed (the new expression is not reproduced in this listing); how does the field depend on the distance from the surface?

Problem 2: Obtain an expression for the self-inductance per unit length of the parallel-wire transmission line in the figure below in terms of a, d, and μ, where a is the radius of the wires, d is the axis-to-axis distance between the wires, and μ is the permeability of the medium in which they reside.

Problem 3: In terms of the dc current I, how much magnetic energy is stored in the insulating medium of a 3 m long, air-filled section of a coaxial transmission line, given that the radius of the inner conductor is 5 cm and the inner radius of the outer conductor is 10 cm?

Problem 4: A wire is formed into a square loop and placed in the x-y plane with its center at the origin and each of its sides parallel to either the x or y axis. Each side is 40 cm in length, and the wire carries a current of 5 A whose direction is clockwise when the loop is viewed from above. Calculate the magnetic field at the center of the loop.
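Problems 3 and 4 have closed-form answers from standard results that are easy to check numerically: the magnetic energy stored between coaxial conductors of radii a < b over a length l is μ0·I²·l·ln(b/a)/(4π), and the field magnitude at the center of a square loop of side L carrying current I is B = 2√2·μ0·I/(πL). The sketch below simply evaluates these formulas with the given numbers.

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def coax_energy(current, length, a, b):
    """Magnetic energy (J) stored between coaxial conductors of radii a < b."""
    return MU0 * current**2 * length * math.log(b / a) / (4 * math.pi)

def square_loop_center_b(current, side):
    """|B| (T) at the center of a square current loop of the given side length."""
    return 2 * math.sqrt(2) * MU0 * current / (math.pi * side)

if __name__ == "__main__":
    # Problem 3: 3 m section, a = 5 cm, b = 10 cm; the energy scales as I^2
    print(coax_energy(1.0, 3.0, 0.05, 0.10))   # J per unit I^2, about 2.08e-7
    # Problem 4: side 40 cm, current 5 A
    print(square_loop_center_b(5.0, 0.40))     # about 1.41e-5 T
```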

$25.00

[SOLVED] ST5226 Spatial Statistics Project Assignment AY 2024/2025

ST5226: Spatial Statistics Project Assignment, AY 2024/2025
Due on Friday, 22 November, 2024

Instructions:
1. This project assignment consists of two parts. The total marks for this assignment is 60.
2. Please upload your work in 1 single PDF file to the Canvas assignment "Project". The size of your file should be no more than 10Mb. The deadline for submission is 11:59pm, Friday, 22 November, 2024.
3. If you compile the files for Part I and Part II separately, you should merge the two PDF files into one and submit one single PDF file.
4. If your student number is XXX, please name your file XXX.pdf. For example, if your matriculation number is A0012345R, your file should be named A0012345R.pdf, with no other prefix or suffix. You can submit multiple times in Canvas; however, only the last submission will be marked.
5. Please write your student number and name on the first page of your file.
6. You should use either Markdown, Knitr, or LaTeX to compile your final file. You must ensure that your code can be copied and pasted from the PDF file. Screenshots/pictures of programming code are unacceptable and will be regarded as incomplete work. If your code does not produce the correct output, then your solution will be subject to mark deduction.
7. No hard copy will be accepted.
8. No late submission will be accepted (i.e., marks for your project = zero).
9. You are encouraged to discuss with classmates or me if you have any questions. However, copying homework solutions is strictly prohibited.

Part I (30 marks): Analysis of Lip Cancer Data in Scotland

This part consists of several problems related to the data file Scotland.rds. The data set contains an sf object that consists of the following variables:
● cancer: number of lip cancer cases in the district
● expected: expected number of cases
● logratio: log-transformed ratio = log(cancer/expected). The last two records have used a correction with cancer = 0.5 because the recorded cancer counts are zero.
● varlogratio: the variance of the logratio, which depends on the number of cases and the expected count
● northkm: Northing, in km, approximately centered so 0 is the location of Stirling
● eastkm: Easting, in km, approximately centered so 0 is the location of Stirling
● percentAFF: percent of the population in Agriculture, Forestry, and Fisheries.

Why is Stirling used as the center? Historically, Stirling served as the medieval capital of Scotland and lies close to the geographical center of the country. It is perhaps most renowned for the Battle of Stirling Bridge, where the Scottish forces led by William Wallace and Andrew Moray defeated the English army. This battle was famously depicted in Mel Gibson's 1995 film Braveheart—though, interestingly, the movie omits the bridge that played a critical role in the actual battle.

1. (15 marks) Explore the spatial autocorrelation.
(a) (3 marks) Create three plots of the k-nearest neighbors of all regions, for k = 1, 2, 3. For each of the three plots, are all the regions connected in the graph?
(b) (3 marks) Consider the queen-style and rook-style neighbors based on contiguity. Identify all the regions whose queen-style neighbors and rook-style neighbors are different, by giving the names of these regions. Highlight these regions and their neighboring regions on the map. You should fill these regions with one color and their neighboring regions with another color.
(c) (3 marks) Consider the count of lip cancers in the variable cancer. Compute and report the Moran's I statistic using the rook-style neighborhood and B-style weights. Then use Moran's I to determine whether there exists positive spatial dependence for cancer, in each of the three tests: (i) normal test (without skewness correction); (ii) permutation test; (iii) Monte Carlo test, assuming that the count in each region follows a Poisson distribution with expectation given by the variable expected. Comment on your results.
(d) (3 marks) Repeat part (c) with Geary's c statistic. Do you see any difference in the results and your conclusion?
(e) (3 marks) Consider the variable logratio. Compute the local Moran's I statistics for all districts of Scotland, using the rook-style neighborhood and B-style weights. Plot a map of all districts in Scotland, and highlight those districts with significant local Moran's I on the map at the significance level 0.05. You should use different colors for the four types of significant districts: High-High, High-Low, Low-High, and Low-Low.

2. (15 marks) Modeling the areal data. One question of interest is the association between working outdoors and lip cancer (measured by logratio). The percentAFF variable quantifies the percent of the district population working in Agriculture, Forestry, or Fishing. All three occupations require large amounts of outdoor time. In addition, we also want to know whether there are any spatial trends in the north-south and east-west directions.
(a) (3 marks) Fit a simple linear regression model, using logratio as the response variable, and using percentAFF, eastkm, and northkm as the predictors. Report the summary of the model fit and determine which of these predictors are significant. Drop any insignificant predictor(s) from the model and refit the model with only significant predictors. Use plots to check whether there is any violation of the homogeneous variance assumption or the normality assumption.
(b) (3 marks) Following the last model you fitted in (a), refit it using weighted least squares, assuming that the individual variance is proportional to the reciprocal of the variable expected. Report the summary of the model fit. Compare the results with those in (a). Which model is better?
(c) (3 marks) Following (b), for the better model you have determined in (b), check whether it is necessary to fit a further model to account for spatial dependence in the residuals.
Use gls() to fit a model such that the residuals are modeled by an exponential semivariogram. Report the summary of the model fit. Compare the new model with those in (a) and (b). Determine which model gives the best fit to the data.
(d) (3 marks) Following (a), for only the significant predictors in (a), fit the one-parameter SAR model using the B-style weights with rook-style neighbors and constant error variance. Report the summary of the model fit. Use the permutation test based on Moran's I to determine whether there exists positive spatial dependence in the residuals. Does the conclusion from this test agree with the test for spatial dependence in the summary of the SAR model?
(e) (3 marks) Repeat (d) for the one-parameter CAR model. Does your conclusion change regarding the spatial dependence in the residuals?

Part II (30 marks): Course Project

You are required to download a real-world spatial dataset and write a report, 5-10 pages in length, that includes a statistical analysis of the data. The detailed instructions are as follows.
(a) Your data must be real-world spatial data, specifically either geostatistical data or areal data. Using any other type of data will result in a score of zero for Part II of this assignment.
(b) Your dataset must fulfill the following requirements:
● It contains at least 1 response variable and at least 1 predictor variable. The response variable(s) must be spatially varying;
● It contains real geographical information (locations, or shapes of regions, etc.);
● It contains at least 50 observations, but no more than 1 million observations.
You are allowed to combine several real datasets into one dataset. For example, one dataset contains only the precipitation in different areas of Singapore, while another dataset contains the geographical boundaries of all districts in Singapore (such as the data I have provided for Homework 1). You can combine them into one dataset and use it for subsequent analysis.
You do not need to submit your data.
(c) The first section of your report must include a brief overview of the dataset, including the following information:
● The source of your data, and the type of the dataset (geostatistical or areal);
● The definition of all used variables in your dataset, the total number of observations, whether there are missing values, etc.;
● The problem you want to investigate. You may want to check some background information on your data if necessary;
● An overview of the statistical methods that you will use in the analysis.
(d) The main body of your report should include a comprehensive statistical analysis of the data. You need to include the following:
● Simple descriptive analysis, such as maps showing the original variables in the dataset;
● The statistical models you want to apply. For regression or kriging models, please indicate clearly which variable is the response and which are the predictors. Your analysis should account for possible spatial dependence. You may use math formulas occasionally for clarity, such as an equation for some regression model. However, long math derivations will inevitably incur mark deduction because this is an applied project;
● Code and outputs for the statistical analysis, plus your interpretation and discussion.
You should ensure that your analysis is logically sound and that every statistical model you propose is well justified. You may refer to the sample analyses in our lecture notes, tutorials, and homework assignments.
(e) The last section of your report should be a brief discussion or conclusion that summarizes your findings.
(f) You are strongly encouraged to use the various methods and techniques we have introduced in this course so far. You may also use other spatial models beyond our course materials that are suitable for your data, as long as you provide sufficient justification.
(g) You will be heavily penalized for using or unnecessarily discussing any methods, models, or algorithms unrelated to spatial statistics.
(h) Report format requirements:
● The report must be written in either Markdown, Knitr, or LaTeX, and the font size of your main text should be at least 11;
● The total length of your report should be from 5 to 10 pages, including everything. A report exceeding 10 pages will be subject to mark deduction. Note: a longer report within this range does not guarantee a higher grade;
● The total number of figures in your report should be between 4 and 8;
● The total lines of code should range from 50 to 200, including all headers such as library() but excluding blank lines or spacing.
(i) Here are some useful sources for downloading spatial datasets:
● R CRAN Task View: Analysis of Spatial Data. Some R packages contain many spatial datasets, such as rnaturalearth, spData, SpatialDatasets, etc. You can google and find more.
● GeoDa Data and Lab
● Singapore's open data portal
● NUS GIS webpage
● COVID-19 data, such as COVID tracking in US, COVID-19 Data Hub, etc.
You can also find spatial datasets from research papers. Please make sure that you cite the data source in the first section of your report.
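Moran's I, used throughout Part I, has a simple closed form: I = (n/S0) · Σi Σj wij (xi − x̄)(xj − x̄) / Σi (xi − x̄)², where wij are the spatial weights and S0 is their sum. A minimal sketch follows (in Python here, though the course's analyses would typically be done in R), using a tiny made-up chain of four regions with binary (B-style) weights:

```python
def morans_i(values, weights):
    """Moran's I for a list of observations and a dense weight matrix.
    weights[i][j] is the spatial weight between regions i and j (0 if not neighbors)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)          # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

if __name__ == "__main__":
    # Four regions on a line; B-style (binary) weights between adjacent regions
    w = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
    print(morans_i([1, 2, 8, 9], w))   # similar neighbors -> positive I
    print(morans_i([1, 9, 2, 8], w))   # alternating values -> negative I
```

The permutation and Monte Carlo tests in Part I then ask how extreme the observed I is under reshuffled or simulated data.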


[SOLVED] EE512 Stochastic Process for Financial Engineering Fall 2024 Homework 10 SQL

Homework #10
EE512: Stochastic Process for Financial Engineering, Fall 2024
Assigned on: April 13, 2024
Due on: November 26, 2024
November 13, 2024

Note: All problems are taken from the book "A First Course in Stochastic Calculus" by Louis-Pierre Arguin, AMS, 2020.

Chapter 10
• 10.10. Numerical Projects and Exercises: 10.1, 10.2
• Exercises: 10.1, 10.2 (a, b) [correction: change the last long put with K = 200 to 1 long put with K = 250], 10.4, 10.5, 10.6(a), 10.7(a)
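For reference, the bracketed correction above replaces the last option with one long put at strike K = 250. A long put's value at expiry is max(K − S, 0); a quick sketch (the strike comes from the correction, the spot prices are purely illustrative):

```python
def long_put_payoff(spot, strike=250.0):
    """Expiry payoff of one long European put: max(K - S, 0)."""
    return max(strike - spot, 0.0)

# Illustrative spot prices: in the money below the strike, worthless above it.
for s in (150, 250, 300):
    print(s, long_put_payoff(s))
```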


[SOLVED] GEOG100 OL01 - Fall 2022 R

StoryMaps Project Overview
GEOG100 OL01 - Fall 2022

Instead of writing a term paper for this assignment, you will design and produce an interactive digital map using the platform ESRI StoryMaps. Your map will explore the significance of space, place, and scale to a place, thing, or event. Your StoryMap will be organized in a way that directly addresses the following prompt: Human geographers explore the interrelationships between people and places. Explore your topic by asking: how are/have humans shaped or impacted this place, thing, or event? How has/is this place, thing, or event shaped or impacted humans? Respond to these questions by organizing your StoryMap around geographical processes pertaining to your topic. To do so, find a variety of ways to communicate the features, elements, and interactions that shape the identity of your chosen place, thing, or event using a variety of scales (space and time).

Why are we doing this? This project will help you to explore the significance of something or somewhere using the spatial analysis toolbox of a human geographer. You will further develop your geographic literacy (geo-literacy), which means having a spatial awareness that enables you to see and understand patterns, distributions, relationships, and interactions of physical and human realms. StoryMaps is a form of digital storytelling that offers you the opportunity to connect global or big-picture issues and ideas to local communities; to visualize your learning; and to engage in interdisciplinarity (e.g. by layering data sets from multiple disciplines or time periods, showing relationships between people and places). You will learn how to transfer primary source information into digital map points, to create maps and pair them with text to form a compelling narrative argument, and you will communicate this information with your classmates and publish it to a wider audience (through StoryMaps' shareable digital platform).
These are all excellent skills to have for your present academic and professional career.

Learning Goals
• Identify and visualize relationships between people and places
• Identify and visualize changes in a place, thing, event, or landscape over time
• Identify reliable data sources and explain how to assess data
• Use in-built citations and references
• Identify and collect your own data to help analyze a topic
• Develop ICT skills
• Use an inquiry approach in geography that describes the 'what' and 'where' of an issue or pattern by displaying both qualitative and quantitative data
• Communicate meaning through data visualization
• Assess and evaluate your own work, and the work of others

How are we doing this? This is a big project, so we are taking a 'scaffolding' approach by breaking up the work over the term into three preparatory assignments, one final submission in the form of a StoryMaps Showcase, and a round of overall feedback and reflection. Some of these elements will receive feedback from your instructor and tutor markers; others will undergo peer feedback. In our course Canvas modules, you will find three self-paced StoryMaps tutorials. Each of these tutorials corresponds to a portion of your assignment.

ASSIGNMENT BREAKDOWN
Part I - Project Plan Graded Discussion - due 9 October
Step 1: Read through Tutorial 1: Get Started with ArcGIS StoryMaps (including following the links to find inspiration, and the guide for planning and outlining your story)
Step 2: Set up an ArcGIS account. (Instructions in 'Assignment Instructions' Module)
Step 3: Sign up for a thematic group under People -> Groups -> StoryMaps Thematic Groups
Step 4: Download the Project Plan Worksheet (available in the Assignment submission portal)
Step 5: Complete and submit the worksheet (please type into the docx and re-upload the document as your submission).
Due date: October 9
Step 6: After your individual submission, you will have a week to read through each of your group members' project plans and leave feedback in the form of a comment on their post. Your feedback should list (at a minimum):
a. something that is exciting or interesting about the student's project description,
b. something you think might improve the final submission (it could be an additional source you know of, something you think the project should highlight, a way of organizing the StoryMap, or a piece of advice in response to a question they have)

Part II - StoryBoard Graded Discussion - due 30 October
Part II of your StoryMap assignment asks you to submit a StoryBoard Template. This takes the three broad geographical elements from your project plan and helps you plan out exactly how you will explain, represent, and source these elements. A StoryBoard functions as the outline for your StoryMap, allowing you to sketch out the components of your project before jumping into the program. Below are a few StoryBoard examples to model your own on, each of which corresponds to a different StoryMap template in the old builder or new builder.

Detailed labour instructions:
Step 1: Choose or create a table that corresponds to your chosen template (guided map tour, cascade, tour, express map, etc.). No matter what template you choose, everyone's final StoryMap is expected to have:
(1) A 300-word introduction that introduces the topic and its significance, and presents a thesis statement that responds directly to one or both of the assignment prompts (How are/have humans shaped or impacted this place, thing, or event? How has/is this place, thing, or event shaped or impacted humans?)
(2) 3 distinct sections, each organized around a geographical element (refer to your worksheet for Part I); each section could contain ~200 words;
(3) a 200-word conclusion that summarizes the geographical elements, tying them together in a way that supports the thesis statement presented in the introduction.
Below is an example, but there are other templates available for other builders.
Step 2: Following the instructions below, fill in your Template with details about your StoryMap.

New Builder Template (Map, Guided Map Tour). For each section (Introduction, Section 1, Section 2, Section 3, Conclusion), the template has four columns:
• Text summary (what will you communicate using words?)
• Image(s) (how will you communicate your arguments and ideas using images/graphics?)
• Map(s) (what maps best represent your argument and ideas? How will those be displayed for maximum impact?)
• Citation(s), APA format (how will you support your argument using reliable, academic sources? You should have 2-3 sources per section, plus 3-4 in the intro)
No 'new' sources should appear in the conclusion!

Step 3: Submit your completed StoryBoard Template as an attachment to the graded discussion board.
Due date: October 30
Step 4: After your individual submission, you will have a week to read through each of your group members' project plans and leave feedback in the form of a comment on their post. Your feedback should list (at a minimum):
c. something that is exciting or interesting about the student's StoryBoard template,
d.
something you think might improve the final submission (it could be an additional source you know of, something you think the project should highlight, a way of organizing the StoryMap, or a piece of advice in response to a question they have)

Part III: Process Reflection + Plan for Completion
Individual Assignment - due 6 November
After completing Parts I and II of this assignment, you now have an overall project plan, an overview of the StoryMap's components, a great list of sources, and lots of peer feedback. Part III asks you to set out a plan for completing the project, focusing on how you will build it to the best of your ability. By following the steps below, you will set goals and a schedule for your StoryMap's completion, and ask yourself some important questions along the way to ensure its strength and relevance.

Detailed Labour Instructions:
Step 1: Read through the labour instructions for your StoryMap's Final Evaluation (available in the StoryMaps Assignment Overview in the Assignment Instructions + Resources)
Step 2: Respond to each of the following prompts with ~50-word answers.
1. What aspect of your StoryMap creation are you most looking forward to as you work towards completion of this assignment?
2. After having read and thought about the final submission criteria, what element(s) of the assignment are you still unsure about?
3. Where do you think you can find support for these element(s)? (Hint: Librarian Sarah Zhang is hosting drop-in sessions, there are great resources online for building a StoryMap, and your peers might also be able to help you out!)
4. What does success look like for you with this project? (Hint: this response should help you to set your own goals and vision for your StoryMap)
Step 3: Complete the table below and submit it as part of your worksheet. You must have a minimum of 10 rows/tasks (far left column). You should remove the 2 examples given in the template and replace them with your own.
The table has five columns: Task | Sub-Tasks | Labour Estimate | Personal Deadline | Notes. Two example rows:

Example 1: Finalize introduction text
• Sub-tasks: find a convincing 'hook' (something attention-grabbing to begin the StoryMap); write a thesis statement that responds to the assignment prompt; ask a classmate to provide peer feedback on a final draft
• Labour estimate: 2 hours; Personal deadline: November 10
• Notes: read through StoryMaps examples from the self-paced modules to get some good ideas and inspiration

Example 2: Create Immersive Blocks (guided tour) to showcase my 3 geographic elements
• Sub-tasks: choose a focused map panel to fill the block frame; compose the text and media for each tour point
• Labour estimate: 6 hours (~2 hours per geographic element); Personal deadline: November 13
• Notes: watch the guided tour tutorial for step-by-step instructions for building a guided tour; attend the Drop-In StoryMaps workshop on June 10

Step 4: Submit Part III in the form of an attached file which includes the 4 prompt responses and a completed Task Table.

Final Submission - due 20 November
Step 1: After you have finished your StoryMap according to all of the parameters outlined in Part II, click the 'Publish' menu at the top right of your story map and choose 'Private'. Under 'Group sharing', click the search box; the group "GEOG100 - 22FA" which you belong to should appear in a dropdown window. Select it. This means that you choose to share the story with the group only (the public will not be able to see it, although it is called 'publish'). Click Publish.
Step 2: Share your StoryMap's URL link within your thematic group's graded discussion board before the due date. This is your official 'submission' in Canvas.
Step 3: Once everyone's map has been shared to the Group Discussion, we will begin the Q&A period. Carefully read through and examine the StoryMap using the URL link posted by the student above you in your group discussion thread (if you were the first to post, read the last one on the thread).
Step 4: Think of a question to ask the StoryMap author about the ideas and research presented in the StoryMap. Maybe you're not quite clear about something, maybe you'd like more explanation of one of the figures, or maybe you have questions about the implications of the argument. Pose your question to your peer by replying to their post. The question is due on Thursday 24 November, 11:59 PST. The questions should be relevant to the StoryMap and clearly demonstrate close and critical engagement with it.
Step 5: Provide a thoughtful response to the question posed to you by another student. Your answer is due on Sunday 27 November, 11:59 PST. Your answer should use evidence from your research to fully answer the question and demonstrate knowledge of your subject.
Step 6: Read through all the StoryMaps in your group, and vote on the ONE you think is the best by "liking" it. The StoryMap with the most votes in each group will be highlighted for the whole class to view. Make sure you do this by Sunday as well. The StoryMaps with the most likes will be selected to share with the entire class.

Your StoryMap will receive feedback and be evaluated qualitatively by the instructional team using the following criteria:
◦ The StoryMap contains an introduction (~300 words), 3 main sections (~200 words x 3) each organized around a different geographic element, and a conclusion (~200 words)
◦ The StoryMap contains a thesis that responds directly to the assignment prompt (which is: Explore your topic by asking: how are/have humans shaped or impacted this place, thing, or event? How has/is this place, thing, or event shaped or impacted humans?). Need help? Try a thesis generator!
◦ Map use and/or construction is good (are key locations highlighted? Are data layers clear, readable, and understandable? If symbols are used, are they clearly defined in a legend or elsewhere?)
◦ General design is adequate (well organized, easy to read, interactive elements work as expected, facilitates understanding of the topic)
◦ Data is relevant and well-sourced (see the optional Finding Scholarly Articles Tutorial: click on #1 to get started, then use Next to move through each step)
▪ Is the data source identified?
▪ Is the data source reliable?
▪ Are the data clearly presented?
▪ Does the presentation of the data introduce any bias through the way it is visually represented?
▪ Is there a good balance of primary and secondary sources?
◦ APA format is required for all citations, and citations must be managed using the Credits section of your StoryMap as per the tutorial
◦ No plagiarism is present (be sure that you thoroughly understand what constitutes plagiarism by completing the SFU Plagiarism Tutorial)
◦ Text explanation is present and meets the expectations outlined in the project guidelines
◦ Added visuals or media are present, sources are identified via photo credits, and the visuals/media are clearly connected to the topic at hand and facilitate understanding of that topic


[SOLVED] Term Paper Medicine Wheel Pedagogy

Term Paper: Medicine Wheel Pedagogy
Cree Elder Michael Thrasher: https://www.edcan.ca/articles/teaching-by-the-medicine-wheel/
MW Pedagogy Paper: 25% (Due Week 12)

For this assignment, you will use Medicine Wheel Pedagogy to develop an understanding of a concept from an Indigenous perspective. This will help you to understand this concept from a wholistic perspective. (We will discuss what it means to understand concepts wholistically in class lecture.) Process is key for this assignment. Be sure to reflect on and write about the process of moving through each "quadrant" or "section" of the medicine wheel as you begin to understand the concept.

How to do this assignment:
In the EAST, you must position yourself and define the concept assigned to your class section – NOT Indigenous perspectives – the understanding YOU have of the concept prior to this assignment. My lectures can be cited for this section but ONLY this section. If you choose to use a dictionary definition or other sources, you MUST cite those sources.

In the SOUTH section, tell us your personal experience with the assigned concept. NO citations are required in the SOUTH section of your paper because this section relates your own story to the topic. Based on the memories or personal associations you have with the assigned concept, tell that story.

In the WEST section of the paper, read and summarize, in your own words, the Indigenous resource assigned to your class. This resource is given below these instructions and will be available for download on Moodle. In this section, you must figure out what the concept means from an Indigenous perspective. NO marks will be given for this section where direct quotes are used. Paraphrase only. This resource MUST be cited.
In the NORTH, the last section of the paper, tell me how and why YOU (not the government or Canadians) should/could honour Indigenous understandings of the concept assigned to your section while you are living and learning in Indigenous homelands. The objective of this section is to demonstrate political integrity: to honour through action First Peoples' ways of being while benefitting from Indigenous homelands. Marks will be taken off assignments that perpetuate colonial perspectives in your writing. Colonial perspectives will be discussed in the Week 2 lecture.

Assessment Criteria:
• 1.5-spaced
• 1,000 words
• 12pt font
• 1" margins
• APA citation
NO TITLE PAGE, ABSTRACT OR HEADER REQUIRED. Put your name, student number, date, instructor name, course and paper title at the top of your first page. A guiding template is posted in Week 1 of your INDG 101 Moodle. Proof-read and edit your final paper AT LEAST 2 times before submitting it; it will affect your grade in positive ways.


[SOLVED] ENG3004 Course work R

ENG3004 Course work (Due on 13 Dec 2024)

Section A [15 marks]: Simulations of EM fields

Problem 1: A transverse electromagnetic wave with wavelength λ = 5 cm travels in the positive x-direction. The electric and magnetic field components are given by
E(x, t) = ŷ E0 sin(ωt − kx),  H(x, t) = H0 sin(ωt − kx + α).
(a) By substituting the electric and magnetic fields into Maxwell's equations, determine any relationships among the different symbols defined in the fields.
(b) Write down an expression for the time-averaged power carried by the electromagnetic field.
(c) Calculate the propagation constant k. What is the phase velocity up of this electromagnetic field if the frequency f is equal to 1 GHz?
(d) Plot the two components of the electromagnetic wave at t = 0 for the region 0 ≤ x ≤ 20 cm.
(e) Plot the time evolution of the electric and the magnetic fields at the location x = 3 cm for the time interval 0 ≤ t ≤ 2 ns.
(f) Make a three-dimensional plot of the above electromagnetic wave at t = 0 for the region 0 ≤ x ≤ 20 cm. Show simultaneously the electric and magnetic field components in the plot.
For the above plotting, please use either MATLAB or Mathematica, and list the essential part of your code if possible. For MATLAB, you may need the plot and plot3 commands. For Mathematica, you may need the Plot and Plot3D commands. Remember to label the axes for ease of reading your graphs.

Problem 2: (a) A cylindrical arrangement has a solid inner material of radius a with volume charge density ρv (in units of C/m³) which is surrounded by a thin metal shell of radius b with surface charge density ρs (in units of C/m²). Use Gauss's law to calculate the electric flux density D as a function of radius r for the regions (i) r < a, (ii) a < r < b, and (iii) r > b.
(b) Using MATLAB or Mathematica, plot the magnitude of the electric flux density Dr obtained in part (a) above for distances 0 < r < 15 cm, given ρv = 3 nC/cm³, ρs = −3 nC/cm², a = 5 cm, and b = 11 cm.

Problem 3: (a) Sketch the electric flux density lines for the region a < r < b discussed in Problem 2(a). Briefly discuss the direction of the vectors.
(b) A coaxial cable has an inner conductor of radius a = 5 cm and is surrounded by a grounded thin conductive shell of radius b = 11 cm. Assume the surface charge density on the inner conductor is ρs = 3 nC/cm². Using MATLAB or Mathematica, make a two-dimensional vector plot of the electric flux density D on a cross section of the coaxial cable. Plot the region −15 cm < x < 15 cm and −15 cm < y < 15 cm with a mesh fine enough to clearly see the directions of the vectors. At each grid point you should plot the vector of the electric flux density. In MATLAB, you can consider the quiver command; in Mathematica, the VectorPlot command.

Section B [25 marks]: Complete either one of the following case studies

Case Study I: Antireflection coating for solar cells
Choose any type of solar cell panel and design an antireflection coating. Simulate the optical (light) response of this coating and evaluate its efficiency in reducing reflection and its capability to operate over a wide angular range. In this question, we use the transfer matrix method for the technical investigation.
1. Choose the material, either one currently used or a new suggestion. Explain why you chose this material for the antireflection coating. This step requires some reading of the literature; please provide references.
2. Provide a schematic sketch of the coating design with an indication of the geometrical parameters, and describe the principle of operation.
3. Derive the equations that describe the reflection and transmission from the system with the coating.
Formulate your equations using the transfer matrix method.
4. Plot the reflection and transmission functions across the visible light range for normal incidence.
5. Demonstrate the operational angle range of your coating. Plot in 3D the reflection and transmission functions across the visible light range and angles of incidence from normal to horizontal (90 degrees).
6. Discuss the efficiency of the designed antireflection coating.
If you find some other way to better visualize, describe, or achieve your analysis, you may also use the capability of MATLAB or Mathematica in solving partial differential equations to complete your analysis.

Case Study II: Dispersion engineering of a dielectric planar waveguide
This question investigates the frequency dispersion of the waveguide modes of a dielectric planar waveguide with core index n1, thickness h, and cladding index n2.
1. Choose common materials for the core and cladding for working in the infrared regime for optical communication. Explain the working principle of the waveguide modes and the possible origin of dispersion. Dispersion means broadening of a transmitted light pulse, and it affects the data rate at which we can transmit signals across the waveguide. This step requires some reading of the literature; please provide references.
2. Provide a schematic sketch of the dielectric planar waveguide with an indication of the geometric parameters and the labels essential to describe the principle of the waveguiding mechanism.
3. Derive the mode equation for the system using your own techniques or the transfer matrix method.
4. Plot the dispersion diagram (ω–k diagram). Two to three modes in the diagram will be enough. Describe how the group velocity varies with frequency.
5. Choose a frequency at which only a single mode exists, instead of multiple modes.
Then try to determine (numerically) the group velocity at that frequency and the slope of its change against frequency. This is called the group velocity dispersion.
6. Suppose the core index is fixed. Discuss how the cladding index and the core thickness will change the group velocity dispersion. Use numerical results or graphs if necessary to illustrate your observations.
If you find some other way to better visualize, describe, or achieve your analysis, you may also use the capability of MATLAB or Mathematica in solving partial differential equations to complete your analysis.
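For those choosing Case Study I, the single-layer transfer-matrix calculation can be prototyped before building the full MATLAB/Mathematica version. Below is a hedged Python sketch of the characteristic-matrix reflectance at normal incidence; the indices (air above a silicon-like substrate with ns = 3.5, and the textbook ideal single-layer AR index n1 = sqrt(n0·ns)) are illustrative assumptions, not values from the assignment:

```python
import numpy as np

def reflectance(n0, n1, ns, d, lam):
    """|r|^2 for one homogeneous layer at normal incidence (characteristic matrix)."""
    delta = 2 * np.pi * n1 * d / lam                    # layer phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])                      # effective field/admittance pair
    r = (n0 * B - C) / (n0 * B + C)                     # amplitude reflection coefficient
    return abs(r) ** 2

lam = 550e-9                   # design wavelength, mid-visible (assumed)
n0, ns = 1.0, 3.5              # air over a silicon-like substrate (assumed)
n1 = np.sqrt(n0 * ns)          # ideal single-layer AR index
d = lam / (4 * n1)             # quarter-wave optical thickness

print(reflectance(n0, n1, ns, 0.0, lam))   # d = 0: bare-substrate Fresnel reflectance
print(reflectance(n0, n1, ns, d, lam))     # quarter-wave coating: near zero at design wavelength
```

Sweeping lam (and, with the oblique-incidence admittances, the angle) around the design point reproduces the reflection curves that items 4 and 5 ask you to plot.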


[SOLVED] MSDS 490 Healthcare Analytics and Decision Making Homework Assignment 3 Matlab

MSDS 490: Healthcare Analytics and Decision Making
Homework Assignment 3
Due Date: 12/02/2024 (Monday Midnight)
Submission Instructions: zip all your solution files (data, R code, Word document, figures, etc.) into one file, following the naming convention LastName FirstName HW#.zip. Use the online submission tools in Canvas to submit this homework.
Total Score: 100. Each problem has equal weight.

1. (Basic understanding of Survival Analysis). Table 1 provides data on ten patients who enrolled in a clinical study that was conducted for 20 months.
Table 1: Dataset for Kaplan-Meier Survival Curve Analysis
(i) Provide a table showing the desired calculations (patients at risk) to plot the Kaplan-Meier curve for patients in Group A and Group B.
(ii) Provide confidence intervals for the KM survivor curves.
(iii) Perform the log-rank test to assess whether the survival curve for patients in Group A is statistically different from that of patients in Group B.
(iv) Develop the partial likelihood function to train the coefficients of a Cox proportional hazards model for patients in Group A.

2. (Bias in Survival Analysis). The article [1] discusses two types of potential bias associated with survival analysis. In your own words, describe these two types of biases. Use one or more examples from papers cited in this section for illustration.

3. (Discrete Time Markov Chain). Progression of the CD4 count of an HIV-positive patient is described by a Markov chain with three states: (0, 200), [200, 500), [500, ∞). These states are labeled 1, 2, and 3 respectively. The probability transition matrix for patients transitioning from one state to another every three months is given in Table 2.
Table 2: Probability Transition Matrix of CD4 counts in Standard Care
Assume that an individual is recently diagnosed as HIV positive and has a CD4 count in [500, ∞).
(i) What is the probability that this person's CD4 count will be in the range (0, 200) in the fourth quarter after the initial diagnosis?
(ii) What is the probability that this patient's CD4 count will be in the range (0, 200) for the first time in the fourth quarter after the initial diagnosis?
(iii) Assume that it costs $2,000 per quarter to care for an HIV+ patient in State 3, $5,000 per quarter to care for the patient in State 2, and $10,000 per quarter to care for the patient in State 1. What is the expected cost of caring for this patient during the first year after diagnosis?
(iv) What are the steady-state probabilities of finding the patient in States 1, 2, and 3, and the corresponding steady-state expected yearly cost of patient care?
(v) Now assume that a new medication has become available on the market. The medication adds $2,000 quarterly to the cost of standard care. A clinical trial has shown that those on this medication have an improved probability transition matrix of CD4 counts, given in Table 3.
Table 3: Probability Transition Matrix of CD4 counts in Improved Care
Calculate the steady-state yearly cost of care for the patient(s) who receive the new medication.

4. Use the data in the above question and validate your answers for each part using a simulator. Your simulation should perform at least 100 replications when estimating the desired confidence intervals. Note that your simulation should not perform matrix-matrix products or matrix inversion calculations; the only calculations you will perform will involve matrix-vector products to keep track of patient transitions. For achieving "steady state", you can run the patient transitions for 25 quarters (3-month time intervals) starting from any state.

5. (Partially Observable Markov Decision Process). The article [2] discusses the concept of Partially Observable Markov Decision Processes (POMDP) and its role in medical decision making.
It discusses this concept with a prostate cancer screening example. Your goal is to describe the concept of POMDP, with the necessary equations, and subsequently its use by the authors in the cancer screening example.

References
[1] Peter Groves, Basel Kayyali, David Knott, and Steve Van Kuiken. A practical overview and reporting strategies for statistical analysis of survival studies. CHEST, 158:S39–S48, 2020.
[2] Lauren N. Steimle and Brian T. Denton. Markov Decision Processes for Screening and Treatment of Chronic Diseases, pages 189–222. Springer International Publishing, Cham, 2017.
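Problem 4's restriction to matrix-vector products can be sketched as follows. The quarterly costs come from Problem 3(iii), but the transition matrix below is a hypothetical stand-in, since Table 2's actual values were not reproduced in this listing:

```python
import numpy as np

# Hypothetical 3x3 quarterly transition matrix (Table 2's real values are not
# shown in this listing); row i gives the next-quarter distribution from state i+1.
P = np.array([[0.70, 0.25, 0.05],
              [0.10, 0.70, 0.20],
              [0.02, 0.13, 0.85]])
cost = np.array([10_000, 5_000, 2_000])   # quarterly care cost per state, from Problem 3(iii)

pi = np.array([0.0, 0.0, 1.0])            # newly diagnosed: CD4 in [500, inf), i.e. State 3
for _ in range(25):                       # 25 quarters; matrix-vector products only
    pi = pi @ P                           # push the state distribution forward one quarter

yearly_cost = 4 * pi @ cost               # approximate steady-state yearly cost of care
print(pi, yearly_cost)
```

For the confidence intervals Problem 4 asks about, you would instead sample individual patient trajectories from the rows of P for at least 100 replications; the deterministic push above is the sanity check that 25 quarters is enough to approach steady state.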


[SOLVED] CIVL 326 GEOTECHNICAL DESIGN REVIEW ASSIGNMENT 2 C/C

CIVL 326 GEOTECHNICAL DESIGN REVIEW ASSIGNMENT 2

Question 1
The rectangular footing below will carry 2 column loads as shown. The footing is 3 m by 5 m and is installed 3.0 m below ground surface, founded on a hard clay. The clay's unit weight is 18.0 kN/m³, its cohesion is 10 kPa, and its angle of internal friction is 22°. Groundwater was not detected during the drilling program. Determine the ultimate bearing capacity for the footing, the minimum and maximum stresses below the footing based on the column loads, and the factor of safety against bearing capacity failure. Is the footing safe? (qu = 823.9 kPa, qmin = 193.3 kPa, qmax = 273.3 kPa, FS = 3.01)
Scale: NTS

Question 2
A proposed strip footing carrying a line load of 150 kN/m is to be constructed on a uniform compact silty sand. The sand's dry unit weight is 19.6 kN/m³, its cohesion is 5 kPa, and its angle of internal friction is 28°. The depth of the footing will be 1.5 m below ground surface. Groundwater was not detected during the drilling program. Determine the minimum width of this proposed footing, using a factor of safety of 3, an assumed footing thickness of 0.5 m, and a unit weight of concrete of 24 kN/m³. (B = 0.8 m)

Question 3
A 500 mm diameter smooth steel pipe pile is driven through stiff clay to a depth of 16.0 m. A site investigation showed that the upper 5 m of clay had an unconfined compressive strength of 25 kPa, while the lower clay had an unconfined compressive strength of 85 kPa. The groundwater table was located 5.0 m below ground surface. Determine the total allowable vertical load on the pile using a factor of safety of 2. (QAllow ≈ 400 kN, depending on your selection of alpha)

Question 4
A straight-shaft augered pile is constructed in a clayey soil as shown in the figure. The diameter of the shaft is 0.75 m. The drilled pile extends to a total depth of 15 m. Soil conditions are as shown in the following figure.
Groundwater was encountered at a depth of 8 m below ground surface. Compute the maximum allowable design load on the pile using a factor of safety of 2.5. (QAllow  = 1181.2 kN)
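The quoted answers (e.g. qu = 823.9 kPa in Question 1) depend on which bearing-capacity factors and shape corrections the course specifies. As a hedged cross-check only, here is a Python sketch of the general bearing-capacity equation for a strip footing using the Meyerhof/Vesic factor expressions; that choice, and the omission of shape corrections for the 3 m x 5 m rectangle, are assumptions, so the result will not match the answer key exactly:

```python
import math

def bearing_factors(phi_deg):
    """Meyerhof/Vesic bearing-capacity factors Nc, Nq, Ngamma (assumed forms)."""
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1) / math.tan(phi)          # undefined at phi = 0; fine for phi > 0
    Ng = 2 * (Nq + 1) * math.tan(phi)
    return Nc, Nq, Ng

def qu_strip(c, gamma, Df, B, phi_deg):
    """Strip-footing ultimate bearing capacity: qu = c*Nc + gamma*Df*Nq + 0.5*gamma*B*Ngamma."""
    Nc, Nq, Ng = bearing_factors(phi_deg)
    return c * Nc + gamma * Df * Nq + 0.5 * gamma * B * Ng

# Question 1 inputs (c in kPa, gamma in kN/m3, depths in m), treated as a strip footing.
print(bearing_factors(22.0))
print(qu_strip(c=10.0, gamma=18.0, Df=3.0, B=3.0, phi_deg=22.0))
```

Swapping in the Terzaghi factors (or adding the rectangular shape factors) is a one-line change in `bearing_factors`, which makes it easy to see which factor set the 823.9 kPa answer assumes.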


[SOLVED] ECON1202 Quantitative Analysis for Business Economics T3 2024 Prolog

ECON1202 – Quantitative Analysis for Business & Economics
ECON1202 – Excel Assignment (10%), T3 2024
(Due Thursday 4.00pm, Week 11 (21/11/2024))

Purpose
In this assignment, you will apply some of the quantitative methods covered in this course using Excel. Read the entire assignment carefully, and follow all instructions, including the submission instructions at the bottom of this assignment.

Background
Congratulations on your new role as a Portfolio Analyst at 1202 Capital! Your supervisor, Phuc, has assigned your first task: optimising an investment portfolio. You have already completed a preliminary task: identifying the three stocks that delivered the highest annual returns over the past year. After thorough research, you determined the top performers: Alphabet Inc. (GOOGL) with a return of 78.2%, The Toro Company (TTC) with an impressive 85.4% return, and FirstService Corporation (FSV) with a 60.8% return.

To begin, you'll work with the "Stock Data" file on Moodle, which includes monthly returns for these companies over the past five years. This information is captured in two matrices:
•   The "returns" matrix (r) contains the average monthly returns of the three stocks.
•   The "variance-covariance" matrix (V) provides the monthly variances of the three stocks as well as the covariances between the stocks' returns. The variance of asset returns is a measure of how much an asset's return varies with respect to its average return. A large variance implies higher risk (in the sense that there is more variation around the average return), while a small variance indicates lower risk. Covariance in the context of the stock market indicates how any two assets' returns move together. A positive covariance indicates that the two assets' returns move in the same direction, whereas a negative covariance implies that the two assets' returns move in opposite directions.
For a portfolio of 3 assets (say, A, B and C), the variance-covariance matrix contains σ_A², σ_B², σ_C² (the variances of the returns of assets A, B, and C) on its diagonal, and entries such as σ_{A,B} (the covariance of the returns of assets A and B) off the diagonal; the other covariances are interpreted similarly. Both the "returns" matrix and the "variance-covariance" matrix have already been filled out, so please do not modify them further. Refer to the "Q1-Q4" tabs in the spreadsheet to answer questions 1 through 4.

Question 1 [3 marks]
Phuc has decided to allocate half of the available capital to Alphabet Inc. (GOOGL). The remaining capital will be split equally between The Toro Company (TTC) and FirstService Corporation (FSV). Your task is to calculate the portfolio's monthly expected return, using the following formula:

r_p = wᵀr

w: the "weight" matrix (G17:G19). This matrix contains the portfolio's weights for the three companies. You will need to determine these weights based on the capital distribution outlined above.
r: the "returns" matrix (G3:G5) described earlier in the assignment.
r_p: the portfolio's monthly expected return.
Report the answer in cell G21 in the spreadsheet.

Question 2 [3 marks]
Using the weights from Question 1, you will now calculate the variance of the portfolio. To do so, apply the following formula:

var_p = wᵀVw

w: the "weight" matrix (as calculated in Question 1)
V: the variance-covariance matrix of the portfolio's returns, described earlier in the assignment.
var_p: the portfolio's variance.
Report the answer in cell G24 in the spreadsheet.

Question 3 [3 marks]
Calculate the determinant of matrix V⁻¹ and determine whether V is singular or non-singular. Report the determinant of V⁻¹ in cell G31 and report V⁻¹ in G27:I29 in the spreadsheet.

Question 4 [4 marks]
After realising that the initial weights may not be optimal, Phuc has tasked you with using Excel's Solver to determine the optimal portfolio.
The goal is to maximise the reward-to-risk ratio, also known as the Sharpe ratio, which is the ratio of excess return to risk. In this context, excess return is defined as the difference between the portfolio's monthly expected return and the monthly return on a risk-free investment. The risk-free rate (r_f) is currently 0.3% per month, based on the 10-year Commonwealth government bond yield, according to the latest data from Bloomberg. The portfolio risk is measured by the standard deviation, which is the square root of the portfolio's variance. Your objective is to use Solver to optimise the weight allocation to maximise the Sharpe ratio, effectively balancing return with the risk taken. The optimisation problem is outlined below:

maximise s_p = (r_p − r_f) / √(var_p), subject to the portfolio weights summing to 1.

Report the optimal weights in cells G37:G39, and the Sharpe ratio (s_p) in cell G45. Note: if the weight of an asset is negative, leave it as negative. This is known as short-selling.

Question 5 [7 marks]
In addition to stocks, cryptocurrencies like Bitcoin (BTC) have also generated impressive returns over the past year. However, due to Bitcoin's significant volatility, Phuc is uncertain whether including BTC in the portfolio will improve the reward-to-risk ratio (i.e. the Sharpe ratio, s_p) from Question 4. Navigate to the "Q5" tab in the spreadsheet and redo the optimisation, this time incorporating Bitcoin alongside GOOGL, TTC, and FSV in the portfolio. Your task is to determine if the inclusion of BTC enhances the reward-to-risk ratio. Report the optimal weights in cells H17:H20, and the Sharpe ratio (s_p) in cell H26.

Answer the following questions in the quiz on Moodle. Compared to the portfolio with 3 stocks in Question 4:
• Does including Bitcoin lower the overall risk of the portfolio?
• Does including Bitcoin improve the reward-to-risk ratio of the portfolio?
Note: Please upload the completed Excel file using the link provided in Moodle.
Ensure that the file includes all formulas used in the matrix calculations [r_p, var_p, V⁻¹, det(V)]. Failure to upload the Excel file will result in a zero score for this assignment.

Submission instructions
(1) Answer submission: Enter your responses via the "Excel Assignment – Answer Submission" quiz in Moodle. You are allowed only one attempt to submit the quiz, so please ensure that you are confident in your answers before doing so.
(2) Excel file submission: Upload your completed Excel file through the "Excel file submission" link on Moodle. Name your file using the following format: FirstName_LastName_StudentID
(3) Penalty for incomplete submission: If you upload the Excel file without completing the quiz, a 50% penalty will be applied.
(4) Late submission penalty: a penalty of 20% per day (or part thereof), including weekends, will be applied for late submissions. The assignment is due by 4pm on Thursday of Week 11 (21/11/2024).
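The matrix arithmetic behind Questions 1, 2 and 4 can be sanity-checked outside Excel. Below is a minimal Java sketch of r_p = wᵀr, var_p = wᵀVw and the Sharpe ratio; all numeric values are hypothetical stand-ins, not the figures from the "Stock Data" file:

```java
// Sketch of the portfolio formulas with hypothetical numbers.
public class PortfolioMath {
    // r_p = w^T r
    static double expectedReturn(double[] w, double[] r) {
        double rp = 0;
        for (int i = 0; i < w.length; i++) rp += w[i] * r[i];
        return rp;
    }
    // var_p = w^T V w
    static double variance(double[] w, double[][] V) {
        double var = 0;
        for (int i = 0; i < w.length; i++)
            for (int j = 0; j < w.length; j++)
                var += w[i] * V[i][j] * w[j];
        return var;
    }
    // s_p = (r_p - r_f) / sqrt(var_p)
    static double sharpe(double rp, double varp, double rf) {
        return (rp - rf) / Math.sqrt(varp);
    }
    public static void main(String[] args) {
        double[] w = {0.5, 0.25, 0.25};         // half GOOGL, rest split equally
        double[] r = {0.049, 0.053, 0.040};     // hypothetical monthly returns
        double[][] V = {                        // hypothetical var-cov matrix
            {0.0040, 0.0010, 0.0008},
            {0.0010, 0.0060, 0.0012},
            {0.0008, 0.0012, 0.0030}
        };
        double rp = expectedReturn(w, r);
        double varp = variance(w, V);
        System.out.printf("r_p = %.5f, var_p = %.7f, Sharpe = %.4f%n",
                rp, varp, sharpe(rp, varp, 0.003)); // r_f = 0.3% per month
    }
}
```

Solver's optimal weights should give a Sharpe ratio at least as large as any hand-picked weights run through the same formulas.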


[SOLVED] STA475 Assignment 3 Fall 2024 Matlab

STA475 Assignment #3 (Fall 2024)

Instructions
Due date: Friday November 22 at 11:59pm
• No-questions-asked grace period until Monday November 25 at 5pm
• No late submissions will be accepted after this time
Where to submit:
• Crowdmark: Submit your answers to each question in the correct space on Crowdmark
• MarkUs (link: https://markus.teach.cs.toronto.edu/markus/courses/25): Submit the .qmd file with your answers to Q1 to MarkUs - please just save a copy of the questions and enter your answers in the space provided. You don't have to use the .qmd file to answer the other questions if you don't want to.
• You can submit as many times as you like before the deadline
• Email submissions will NOT be accepted
Other notes:
• For some questions, you will need to use R; please include all code and relevant output in the pdf submission (make sure the code is visible in the pdf and doesn't run out of the margins). If the question asks you to answer a question based on R output, make sure your answer is easy for your TA to find (i.e. start with your answer, then have the code and output afterwards for reference). Your TA should be able to understand your answer without looking at your code and output (as appropriate), but may refer to these to better understand what you did.
• While you may discuss questions with your classmates, you MUST submit independent work that you did yourself. Students submitting identical solutions (e.g. identical sentences, derivation steps, or chunks of code) will be investigated for violations of academic integrity.
• If you believe you've found a typo or error in the assignment, please email [email protected] so I can look into it and get back to the class as quickly as possible.
Question 1 [12 points]
To answer the questions below, refer to the article titled "Breastfeeding Rates and Related Factors at 1 Year Postpartum in Women with Gestational Diabetes Initially Recruited for a Diabetes Prevention Program". In Table 2, two sets of results are included (unadjusted and adjusted models). Find the relevant section of the article related to this table to understand the difference between these. For each of the following quantities, (i) write the regression equation for the corresponding model, carefully defining any notation you introduce, (ii) fill in the steps to build up your interpretation (the steps are given at the end of this question for your reference), and (iii) write a clear and complete interpretation of the estimated coefficient in the context of the data.
(a) The coefficient for mothers who had breastfeeding troubles (unadjusted model)
(b) The coefficient for mothers who had breastfeeding troubles (adjusted model)

Steps for building up an interpretation
Step 1 (Identify): Identify the target for interpretation
Step 2 (Think): Is it "better" to have larger or smaller values of the target?
Step 3 (Identify): Identify the comparison of interest
Step 4 (Analyze): Determine the direction of the effect
Step 5 (Write): Fill in the basic sentence frame: The ________ for individuals with ________ is ________ times that of individuals with ________.
Step 5b (if applicable, Rewrite): Rewrite Step 5 so that the value reported is bigger than 1 (if it isn't already)

Question 2 [16 points]
The data in leukemia.csv contains survival times for patients with leukemia, measured in weeks from diagnosis. Values of two predictors are also recorded: white blood cell counts (wbc) at diagnosis and AG, a binary predictor that indicates whether a test related to white blood cells was positive (1) or negative (0). leuk


[SOLVED] Introduction to finance R

FINAL PROJECT
Please submit the entire project, including the write-up and Excel analysis, in PDF format.
1. Company Profile
· Write a brief profile of the company you are assigned to, including a description of their products or services and the markets in which they compete.
· Summary of their stock performance in the last quarter versus the market and closest competitors.
2. Financial Analysis
· The financial analysis should include, but not be limited to, the preparation of the latest 3-year financial statements (Income Statement, Balance Sheet and Cash Flow Statement) including the Last Twelve Months (LTM).
· Provide an explanation of the results year-over-year, including ratio and trend analysis.
3. Projections & Valuations
· Projections and Valuation Analysis for your assigned public company using Excel. The valuation analysis should calculate the assigned company's Enterprise Value using the following 3 methods of valuation:
o Method #1 (using the current stock price to calculate the EV)
o Method #4 (based on current trading peer comparables)
o Method #6 (DCF Analysis)
· In addition to the analysis, you need to provide a page or two of discussion of your analysis, including the assumptions you used for driving revenues and expenses.
· Discussion of the results of your various methods of valuation, and a recommendation on whether someone should buy, hold, or sell the stock.
4. Technical Analysis
· Analyze the stock performance against the market (S&P 500), generating a beta coefficient and other standard deviation results using Excel's regression analysis or calculated manually.
ASSIGNED COMPANIES
1. Albany International (AIN): Consumer Cyclical, Textile Manufacturer
2. American Axle & Manufacturing (AXL): Consumer Cyclical, Auto Parts Manufacturer
3. AZEK Corporation (AZEK): Industrials, Building Products
4. Boyd Gaming Corp. (BYD): Consumer Cyclical, Resorts & Casinos
5. Carnival Corporation (CCL): Consumer Cyclical, Cruise Operator
6. Celanese Corporation (CE): Basic Materials, Chemicals
7. Century Casino (CNTY): Consumer Cyclical, Resorts & Casinos
8. Choice Hotels Intern'l (CHH): Consumer Cyclical, Lodging
9. Constellation Brands (STZ): Consumer Defensive, Wineries & Distilleries
10. Crimson Wine Group (CWGL): Consumer Defensive, Wineries & Distilleries
11. Darden Restaurants (DRI): Consumer Cyclical, Restaurants
12. Delta Airlines (DAL): Industrials, Airlines
13. Dunkin Brands Group (DNKN): Consumer Cyclical, Coffee Shops
14. Flowserve Corporation (FLS): Industrials, Specialty Industrial Machinery
15. HCA Healthcare, Inc. (HCA): Healthcare, Medical Care Facilities
16. Hormel Foods Corporation (HRL): Consumer Defensive, Packaged Foods
17. International Paper (IP): Consumer Cyclical, Paper & Packaging
18. Kraton Corporation (KRA): Basic Materials, Chemicals
19. Laureate Education (LAUR): Consumer Defensive, Education & Training Services
20. Marcus Corporation (MCS): Consumer Cyclical, Lodging
21. Marriott International (MAR): Consumer Cyclical, Lodging
22. McDonald's Corporation (MCD): Consumer Cyclical, Restaurants
23. Nabors Industries (NBR): Energy, Oil & Gas Drilling
24. Royal Caribbean Cruises Ltd (RCL): Consumer Cyclical, Cruise Operator
25. Select Medical (SEM): Healthcare, Medical Care Facilities
26. Silgan Holdings (SLGN): Consumer Cyclical, Paper & Packaging
27. Starbucks Corporation (SBUX): Consumer Cyclical, Coffee Shops
28. Steel Dynamics (STLD): Basic Materials, Chemicals
29. TAL Education Group (TAL): Consumer Defensive, Education & Training Services
30. Texas Instruments (TXN): Technology, Semiconductors
31. Texas Roadhouse (TXRH): Consumer Cyclical, Restaurants
32. Tyson Foods (TSN): Consumer Defensive, Packaged Foods
33. U.S. Foods (USFD): Consumer Defensive, Food Distributor
34. United Airlines (UAL): Industrials, Airlines
35. Verso Corporation (VRS): Basic Materials, Paper & Packaging
36. Wendy's Company (WEN): Consumer Cyclical, Restaurants
37. Wesco International (WCC): Industrials, Industrial Distribution
38. Wyndham Hotels & Resorts (WH): Consumer Cyclical, Lodging
39. Wynn Resorts (WYNN): Consumer Cyclical, Resorts & Casinos
40. Yum! Brands (YUM): Consumer Cyclical, Restaurants
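For the Technical Analysis section, the beta coefficient produced by Excel's regression can also be computed manually as beta = cov(stock, market) / var(market). A small Java sketch with made-up monthly return series (the series, not the formula, are the assumption here):

```java
import java.util.Arrays;

// Beta of a stock vs. the market (S&P 500) from paired periodic returns:
// beta = cov(stock, market) / var(market). Return series are hypothetical.
public class BetaCalc {
    static double mean(double[] x) {
        return Arrays.stream(x).average().orElse(0);
    }
    static double beta(double[] stock, double[] market) {
        double ms = mean(stock), mm = mean(market);
        double cov = 0, var = 0;
        for (int i = 0; i < stock.length; i++) {
            cov += (stock[i] - ms) * (market[i] - mm);
            var += (market[i] - mm) * (market[i] - mm);
        }
        return cov / var; // sample-size scaling factors cancel in the ratio
    }
    public static void main(String[] args) {
        double[] stock  = {0.02, -0.01, 0.03, 0.015, -0.005};
        double[] market = {0.01, -0.005, 0.02, 0.010,  0.000};
        System.out.printf("beta = %.4f%n", beta(stock, market));
    }
}
```

A beta above 1 indicates the stock amplifies market moves; Excel's SLOPE(stock_range, market_range) should return the same number.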


[SOLVED] Data Mining Projects List C/C

Data Mining: Projects List
September 27th, 2024

1    Problem 1: Vision Transformers - Practical Study
The goal of this task is to measure speed-accuracy tradeoffs for different efficient Transformer architectures tested on image classification tasks. Compare the regular Vision Transformer (ViT) with the Performer, applying different attention kernels leveraging deterministic kernel features: Performer-ReLU and Performer-exp. Record training time, inference time and classification accuracy on eval tests for all three Transformer types. You can test your models on any of the following datasets (or their subsets): MNIST, CIFAR10, ImageNet, Places365. Bonus points: Can you design a Performer-f_θ variant, where f_θ is a learnable function defining the attention kernel, that outperforms both Performer-ReLU and Performer-exp? Note: A Performer-f variant is a Transformer replacing the regular softmax attention kernel K(q, k) = exp(qkᵀ) with K(q, k) = f(q)f(k)ᵀ.

2    Problem 2: Performers With Random Features - Practical Study
The goal of this task is to quantify the accuracy of the linear low-rank attention models leveraging the mechanism of positive random features. Note that in this setting, the attention kernel K(q, k) = exp(qkᵀ) is replaced with its estimator K̂(q, k) = ϕ+(q)ϕ+(k)ᵀ, where ϕ+ is given as:

ϕ+(x) = (exp(−‖x‖²/2)/√m) · (exp(ω1ᵀx), ..., exp(ωmᵀx))ᵀ

for ω1, ..., ωm iid ~ N(0, I_{dQK}). Compare the accuracy of the approximation of the attention matrix (no row-normalization required) as a function of the number of random features m applied. Measure the error as a relative Frobenius error. Conduct those tests for various distributions of query and key vectors and different sequence lengths. Can you design distributions over queries and keys leading to "spiky" attention patterns, where the groundtruth attention matrix has a sparse number of large entries and the remaining ones are near-zero? Test also the variant where ω1, ..., ωm form block-orthogonal patterns, where within each dQK-element block the projections are exactly orthogonal. How does the quality of the approximation change when you replace Gaussian vectors with Rademacher vectors, with entries taken independently at random from the two-element set {−1, +1} (with probability 1/2 each)?

3    Problem 3: Masked Attention
Consider a Transformer architecture where entries of the (not row-normalized) attention matrix are of the following form:

K(q, k) = exp(qkᵀ) · ReLU(v_q)ReLU(v_k)ᵀ

where v_q, v_k are some feature vectors associated with queries and keys respectively (you can assume that they can be efficiently computed for their corresponding queries/keys). Design an unbiased randomized mechanism for approximating K(q, k) that can be implemented within the low-rank linear-attention scope. Note: You can consider the attention mechanism which is the subject of this problem a masked attention variant, where the regular attention kernel exp(qkᵀ) is modulated by the mask entry of the form ReLU(v_q)ReLU(v_k)ᵀ.

4    Problem 4: Combining different approximators of the attention kernel
Consider an attention kernel K : R^d × R^d → R and two of its estimators, K̂1(x, y) = ϕ(x)ϕ(y)ᵀ and K̂2(x, y) = ψ(x)ψ(y)ᵀ, for some (potentially randomized) maps ϕ : R^d → R^m and ψ : R^d → R^r. Assume that the kernel acts on vectors taken from the sphere of a given radius R. Assume that the former estimator is very accurate for estimating small kernel values, which correspond to large angles between the input vectors (this is the case in particular for the softmax kernel), and that the latter is particularly accurate for estimating large kernel values, which correspond to small angles between the input vectors (this is the case in particular for the softmax kernel). Can you design a third estimator, defined as a linear combination of the two, that combines the best of both worlds and is particularly accurate in both the small and large kernel value regimes? The estimator should also support low-rank linear-attention computations. Hint: Try to define the coefficients of the linear combination as affine functions of the angle α_{x,y} ∈ [0, π] between the input vectors x, y and leverage the following observation:

1 − 2α_{x,y}/π = E[τ(x)τ(y)ᵀ], where the randomized mapping τ is given as τ(x) = (1/√m)(sign(ω1ᵀx), ..., sign(ωmᵀx))

for ω1, ..., ωm iid ~ N(0, I_d). Here m is a hyperparameter of the algorithm.
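As a quick sanity check for Problem 2, the positive-random-feature estimator of the softmax kernel can be tested on a single query/key pair before building the full attention matrix. The vectors, dimension and feature count below are illustrative choices, not values from the problem statement:

```java
import java.util.Random;

// Monte Carlo sketch of positive random features:
// exp(q·k) ≈ phi(q)·phi(k), with
// phi(x) = exp(-|x|^2/2)/sqrt(m) * (exp(w_1·x), ..., exp(w_m·x)).
public class PositiveRandomFeatures {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }
    static double[] phi(double[] x, double[][] w) {
        int m = w.length;
        double scale = Math.exp(-dot(x, x) / 2) / Math.sqrt(m);
        double[] f = new double[m];
        for (int i = 0; i < m; i++) f[i] = scale * Math.exp(dot(w[i], x));
        return f;
    }
    public static void main(String[] args) {
        int d = 4, m = 20000;                    // illustrative sizes
        Random rng = new Random(0);
        double[][] w = new double[m][d];         // w_i ~ N(0, I_d)
        for (double[] row : w)
            for (int j = 0; j < d; j++) row[j] = rng.nextGaussian();
        double[] q = {0.3, -0.2, 0.5, 0.1}, k = {0.1, 0.4, -0.3, 0.2};
        double exact = Math.exp(dot(q, k));
        double approx = dot(phi(q, w), phi(k, w));
        System.out.printf("exact=%.4f approx=%.4f relErr=%.4f%n",
                exact, approx, Math.abs(approx - exact) / exact);
    }
}
```

Increasing m should shrink the relative error roughly at a 1/√m rate, which is exactly the curve the problem asks you to plot (there as a relative Frobenius error over the whole attention matrix).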


[SOLVED] CS 4321/5321 Project 2 Java

CS 4321/5321 Project 2
This project counts for 24% of your grade, of which 6% is for the checkpoint and 18% for the main submission.

1    Goals and important points
In Project 1, you developed a lot of functionality. However, the priority was end-to-end evaluation of SQL rather than efficiency. So, you used line-at-a-time I/O. You also used naïve implementations for join (TNLJ, i.e., tuple-nested loop join) and for sort (in-memory sort). Your sort implementation was particularly problematic because it kept unbounded state – the amount of memory required by the operator depended on the size of the input rather than being bounded by a constant. Now, you will address the above shortcomings:
• you will move from using line-at-a-time I/O to faster page-at-a-time I/O
• you will refactor your code to support different physical implementations for each relational algebra operator (e.g. different join implementations)
• you will implement Block Nested Loop Join (BNLJ), External Sort and Sort Merge Join (SMJ)
• you will ensure that you have at least one implementation for each operator that does not keep unbounded state
• you will do some performance benchmarking of your join implementations against each other
You will still support the same subset of SQL as for Project 1, and you should still construct the join tree in the same manner as in Project 1, following the query's FROM clause. This is a challenging Project, including significant refactoring and nontrivial algorithms to implement. You have a substantial time window to do it, but this window also includes the 4320 prelim and Spring Break. Thus you need to budget your time wisely. To ensure you get started early, we are requiring a checkpoint submission. The checkpoint requirements are described in Section 5. Note that completing the checkpoint does not mean you have completed half the project - there is much more work to do after the checkpoint.
2    Input and output formats
2.1    Top-level inputs and outputs to your project
As in Project 1, we will run your code from the command line by exporting a runnable JAR and typing:
java -jar db_practicum_team_name_2.jar inputdir outputdir tempdir
This means your top-level class has to accept inputdir, outputdir and tempdir as command-line arguments and handle them appropriately.
inputdir is the directory where relevant inputs to your program will be found. This directory will contain:
• a queries.sql file as in Project 1.
• a db subdirectory. This will contain a data subdirectory and a schema.txt file. The data directory contains files for the relations, with one file per relation as in Project 1, except that the files are now in a binary format described in Section 2.2. The schema.txt file is as in Project 1.
• a file plan_builder_config.txt which is the configuration file for the PhysicalPlanBuilder as described in Section 2.3.
outputdir is the directory where your program should write output. You may assume this directory will exist. As in Project 1, after running your code, the answer to the ith query (starting at 1) should be found in file outputdir/queryi.
tempdir is the temporary directory where your external sort operators should write their "scratch" files, a process described more fully in Section 3.4. Again, you may assume this directory exists.

2.2    Binary File Format
This describes the file format for input and output data in Project 2. The motivation for moving to a new file format is that doing line-at-a-time file I/O is inefficient. See http://www.idryman.org/blog/2013/09/28/java-fast-io-using-java-nio-api/ for an interesting performance comparison of various ways to do I/O in Java. In this Project, you will use Java NIO to do page-at-a-time I/O. A "page" for our purposes is a buffer, specifically a ByteBuffer. We will read the file in fixed-sized "chunks".
This means we need a file format where we can read data in fixed-sized chunks rather than in variable-sized lines. (Lines are variable-sized in that each relation potentially has tuples of a different size/length.) Your textbook in Chapters 9.5-9.7 contains an extensive discussion of file formats for databases. In this project, we can work with a fairly simple format as we are not supporting updates. Every relation, whether it's a table in the database or a query answer, is stored in a single data file. Each data file is a sequence of pages. Every page is 4096 bytes in size. Thus, the first 4096 bytes of the file are the first page, the next 4096 bytes are the second page, and so on. This also means that every data file is a multiple of 4096 bytes in size - there are no "half pages." Every page contains two pieces of metadata. The first is the number of attributes of the tuples stored on the page - we assume all tuples stored on the page have the same number of attributes. The second piece of metadata is the number of tuples on the page; this is particularly useful information in case the page is not full. The two pieces of metadata, as well as the tuples themselves, are stored as (four-byte) integers. You may assume all integer values will fall into the int domain (32 bits). You may also assume that all tuples are sufficiently small that you can fit at least one tuple per page. The overall layout of metadata and data on a page is best illustrated by example. Suppose we have a relation which contains just two four-attribute tuples: (10, 20, 30, 40) and (50, 60, 70, 80). Then it fits in a single page, set out as in the image below (each "cell" represents a four-byte integer). Any remaining empty space at the end of any page should be filled with zeroes. There are no "half-tuples" possible on a page. If the last tuple does not fit on the page (e.g.
the tuple has 3 attributes so we need 12 bytes but the page only has 10 bytes left) it will be placed on the next page. If a page does not have space for any more tuples, it is full. Any relation that requires multiple pages is stored so that all pages except possibly the last one are full. Your program must accept data in the binary format and produce outputs (answers) in the binary format. In the sample input and output we have provided, we give you each file in both binary format and in human-readable Project-2-style format as a courtesy to aid with debugging, but we will be testing with binary format files only. When working with binary files, be aware that you can open, view and manipulate such files in programs called hex editors (or binary editors). This could be handy for debugging.

2.3    Format of the configuration file for PhysicalPlanBuilder
This section explains the format for the configuration file for your PhysicalPlanBuilder. If this is your first time reading the document, we recommend skipping it and returning to it once you have read Section 3 (and actually understand what the PhysicalPlanBuilder is!!). We have placed this section here because once you are familiar with the project, it will be easier to have all format-related info in one place. You may assume for this project that the PhysicalPlanBuilder will use the same join implementation for every logical join operator in the logical plan, and the same sort implementation for every logical sort operator in the logical plan. That is, you do not need the ability to generate physical plans that have a mix of TNLJ, SMJ and BNLJ physical operators, or a mix of BNLJ operators each with a different number of buffer pages, etc. The configuration file has two lines; the first specifies the join method, the second the sort method. The join method is either 0 for TNLJ, 1 for BNLJ, or 2 for SMJ.
If the join method is BNLJ, there is a second integer on the same line specifying the number of buffer pages to be used. The sort method is either 0 for in-memory sort, or 1 for external sort. If the sort method is external sort, there is a second integer on the same line specifying the number of buffer pages to be used in the sort. This will always be at least 3. For example, the file:
0
0
specifies that you want TNLJ and in-memory sort, i.e. the Project 1 naïve implementation. The file:
1 5
1 4
specifies that you want BNLJ with a 5-page outer-relation buffer, and external sort with 4-page sort buffers.

3    Implementation instructions
First, some general remarks to consider as you begin the implementation. You will begin with substantial refactoring of your Project 1 code; save snapshots of your code at intermediate stages in case you mess up (you can do this by creating different branches using git). After each refactoring, be sure to test your code to verify you haven't broken any "old" functionality. In this Project, your codebase will expand considerably. Do not skimp on comments and on setting up a good debugging infrastructure as you go; it will be particularly important to stay on top of your codebase and communicate with your partner(s) about APIs. You will be working with bigger relations than in Project 1, so if you are relying on console output for debugging, you may want to move to a setup where your debugging statements get logged to a file instead. This avoids the problem where you have so many things getting output to the console that the first lines get truncated. One simple way to do file-based logging is to create a Logger class that uses the singleton pattern; that way every component in your code can call the logger as needed. You should consider writing yourself a random data generator to test your code. This doesn't have to be fancy; it can create tuples by generating each attribute as a random integer in a specified range.
The range you allow for attributes will dictate how many tuples "match" in joins and selections, so you should probably have that as a configurable setting in your data generator. For example, if you generate two relations with 5000 tuples each, and each attribute has values in the range 0 to 100, probably a lot of tuples will match if you try to join these two relations. If you generate the attribute values in the range from 0 to 10000, far fewer tuples will match. Finally, be aware that once you are done with the two refactoring steps (Sections 3.1 and 3.2), the remaining three tasks can be completed in parallel if you are looking to split up work between partners. BNLJ and external sort can definitely be developed independently; SMJ requires a sorting implementation, but you can use your in-memory sort from Project 1 until your external sort is ready. Also, completing Sections 3.1 and 3.2 constitutes the requirements for the project checkpoint. All of these are great reasons to complete this portion of the implementation ASAP.

3.1    Refactor to use the new file format
Your first task is to refactor your Project 1 code to use the new binary file format for input and output. You are required to use Java NIO for this. If you are unfamiliar with Java NIO, start by reading this tutorial: http://www.ibm.com/developerworks/java/tutorials/j-nio/j-nio.html and looking at the documentation. We recommend using ByteBuffers. They provide handy getInt and putInt methods that relieve you of the need to write code for serializing an integer into bytes. We recommend doing this refactoring by setting up a layer of abstraction so that most of your code does not need to know about file I/O, or even what file format is being used. For example, you can create TupleReader and TupleWriter interfaces. TupleReader has a method to read the next tuple (and probably some bookkeeping methods such as close, reset etc). TupleWriter has a method to write a tuple.
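A binary-format TupleReader along these lines might be sketched as follows. The page layout (4-byte ints: attribute count, tuple count, then tuples) comes from Section 2.2; everything else here - the class name, method signatures, and error handling - is an assumption, not a required API:

```java
import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch of a binary-format TupleReader for the 4096-byte page layout:
// page = [numAttributes][numTuples][tuple ints...], all four-byte ints.
public class BinaryTupleReader {
    private static final int PAGE_SIZE = 4096;
    private final FileChannel channel;
    private final ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
    private int numAttrs, tuplesLeftOnPage;

    public BinaryTupleReader(String path) throws Exception {
        channel = new FileInputStream(path).getChannel();
        loadNextPage();
    }
    private boolean loadNextPage() throws Exception {
        page.clear();
        if (channel.read(page) <= 0) return false; // end of file
        page.flip();
        numAttrs = page.getInt();          // metadata: attributes per tuple
        tuplesLeftOnPage = page.getInt();  // metadata: tuples on this page
        return true;
    }
    /** Returns the next tuple, or null at end of file. */
    public int[] nextTuple() throws Exception {
        while (tuplesLeftOnPage == 0)
            if (!loadNextPage()) return null;
        int[] tuple = new int[numAttrs];
        for (int i = 0; i < numAttrs; i++) tuple[i] = page.getInt();
        tuplesLeftOnPage--;
        return tuple;
    }
    public void close() throws Exception { channel.close(); }
}
```

A file scan operator would then call nextTuple() in a loop, never touching the FileChannel directly; a matching TupleWriter buffers tuples and flushes each page once full.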
Then the rest of your code can just create TupleReaders/TupleWriters as needed; for example, your file scan operator can create a TupleReader and grab tuples from that instead of from the file directly. The logic for getting tuples from a page or writing them to a page is then encapsulated in concrete implementations of TupleReader and TupleWriter. For the new file format, the TupleReader you implement will read the file one "page" at a time - i.e. it will have a running ByteBuffer which it will fill using read calls to an appropriate FileChannel. Then it can extract specific tuples from the page as needed when someone requests the next tuple. The writer's behavior will be symmetric - it can buffer the tuples in memory until it has a full page and then flush (write) the page. It is a good idea to implement TupleReaders and TupleWriters for both the new binary format and the Project 1 human-readable format. That also allows you to write some helpful utilities for debugging, such as a converter between the two formats. Be aware of a couple of quirks of Java NIO we discovered last semester: First, some students reported strange behavior when using the relative get/put methods for ByteBuffer, i.e. getInt() and putInt(int value). If you encounter issues, try the absolute methods getInt(int index) and putInt(int index, int value), which explicitly specify the desired location in the buffer. Second, be aware that Buffer.clear() does not do what you might think it does. The official documentation at http://docs.oracle.com/javase/8/docs/api/java/nio/Buffer.html#clear-- tells us the surprising fact that: "This method does not actually erase the data in the buffer, but it is named as if it did because it will most often be used in situations in which that might as well be the case." The above affects you because it could mess up your padding with zeroes at the end of each page.
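One possible fix for the clear() quirk (a sketch, not the only approach) is to explicitly zero the buffer's backing array before reusing it for the next page:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// ByteBuffer.clear() only resets position/limit; it does NOT zero the data.
// Zeroing the backing array before reuse keeps trailing page space zero-padded.
public class PageZeroing {
    static void resetPage(ByteBuffer page) {
        Arrays.fill(page.array(), (byte) 0); // erase leftover "garbage" bytes
        page.clear();                        // then reset position and limit
    }
    public static void main(String[] args) {
        ByteBuffer page = ByteBuffer.allocate(16);
        page.putInt(10).putInt(20).putInt(30).putInt(40); // a "full" page
        resetPage(page);
        page.putInt(99); // the next page holds a single value
        // Bytes beyond the new value are zero, not leftovers from 20/30/40:
        System.out.println(page.getInt(4) + " " + page.getInt(8)); // prints "0 0"
    }
}
```

Without the Arrays.fill, the same program would print the stale values 20 and 30, which is exactly the corrupted-padding bug described above.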
Specifically, if you reuse the same ByteBuffer for each page when writing, even if you clear() it between pages, this does not refill the buffer with zeroes. If your second-to-last page is full and your last page is not full, and you only do a clear() between pages, the "empty" final portion of the last page you write will actually contain some "garbage" values that are left over from the second-to-last page. The moral of the story is that you have to do something more than clear() to ensure correct padding with zeroes. Once you have sorted out your I/O, this is a great time to write a random data generator. You may also want to write a sorting utility that takes in a file (in either format), sorts the tuples in memory using Collections.sort and writes out a sorted file. This will be very handy once you start implementing different join algorithms - when run on the same data, they may output tuples in different orders. If you have a sorting utility, you can sort the outputs afterwards and compare the files to make sure your SMJ is actually outputting the same results as your BNLJ. Finally, this is as good a time as any to introduce timing functionality into your code. When your top-level class has a query plan and is ready to call dump() to write the results, call System.currentTimeMillis() before and after the dump() to get a measure of the elapsed time. You will need this functionality for the performance benchmarking (Section 3.6) and it is handy to implement it now. Generate yourself some relations with, say, 5000 or 10000 tuples and see how long it takes to run your queries, as a ballpark figure.

3.2    Refactor to use both logical and physical query plans
Your Project 1 code generated the query plan directly from the Statement objects that JSqlParser produced. You will now refactor this into a two-stage process: the generation of a logical query plan from the Statement, and the generation of a physical query plan from the logical query plan.
The logical query plan will just be a relational algebra tree; the physical query plan will contain concrete implementations for each operator, such as SMJ or BNLJ. Thus, your overall workflow in your top-level class will be:

• read a query from a file and parse it using JSqlParser
• convert the Statement you obtained into a logical query plan
• convert the logical query plan into a physical query plan
• evaluate, i.e., call dump() on the physical query plan, including timing the evaluation

This refactoring is necessary to separate two different concerns:

• building a relational algebra tree that represents the query at a mathematical level, and
• translating the relational algebra tree into code that will actually run and produce tuples.

This separation makes it easier to add functionality that pertains to only one of these concerns; for example, in Project 5 you will be optimizing the logical query plan. If you are pushing selections/projections past joins, a lot of that can and should be done before you choose a concrete join implementation. This separation also gives you flexibility in modifying or enhancing the logical plan as you build the physical plan. Notably, if you want to use SMJ, you can insert sort operators on both inputs to the join while constructing the physical query plan. That way the join operator itself only has to perform the merge. Of course, if you are not using SMJ, it doesn’t make sense to insert these extra sort operators, so you want to defer the decision until you know which join implementation you are using.

To refactor, start by creating separate packages with classes for logical and physical operators. Your old Project 1 operators will basically become your physical operators, and you will add a new package with logical operators. A logical operator does not need to store a lot of information; depending on the type of operator, it may need to know things such as the join condition, selection condition, sort order, etc.
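As an illustration of how little state logical operators carry, here is a bare-bones sketch (class names and fields are illustrative, not a prescribed design). They describe what to compute and hold no execution logic:

```java
// Illustrative logical operators: pure description, no getNextTuple()/reset().
abstract class LogicalOperator { }

class LogicalScan extends LogicalOperator {
    final String table;
    LogicalScan(String table) { this.table = table; }
}

class LogicalSelect extends LogicalOperator {
    final LogicalOperator child;
    final String condition;   // e.g. the WHERE expression from JSqlParser
    LogicalSelect(LogicalOperator child, String condition) {
        this.child = child; this.condition = condition;
    }
}

class LogicalJoin extends LogicalOperator {
    final LogicalOperator left, right;
    final String condition;   // join condition; null for a cross product
    LogicalJoin(LogicalOperator left, LogicalOperator right, String condition) {
        this.left = left; this.right = right; this.condition = condition;
    }
}
```

In a real implementation the conditions would more likely be JSqlParser Expression objects than strings; strings keep this sketch self-contained.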
Basically, think about writing your query in relational algebra by hand as you have done in 4320 homeworks; if the information appears in your relational algebra translation, it probably needs to be in the logical operators. On the other hand, the logical operators do not need getNextTuple() or reset() methods, since they will never actually “run”. That logic definitely belongs in physical operators.

Once you have both physical and logical operators, you need code to translate a logical plan to a physical plan. Fortunately, you have extensive experience with the visitor pattern, so this should not be difficult conceptually. Write a visitor class that recursively walks your logical plan and builds up a corresponding physical plan. In the remainder of these instructions we will call this visitor the PhysicalPlanBuilder, but you may name it what you like. For now the PhysicalPlanBuilder code will not be very “clever”, but this will soon change. For example, once you have a BNLJ implementation available, you can set your PhysicalPlanBuilder to convert the logical join operator into either a tuple-nested-loop join or a BNLJ physical operator. Once you have an SMJ implementation, your PhysicalPlanBuilder will also insert physical sort operators alongside a physical SMJ operator.

3.3    Implement BNLJ

After substantial refactoring, you are now ready to implement a new join algorithm: block-nested loop join.

3.3.1    Implement the BNLJ physical operator

You need to implement a new physical operator to compute joins using the BNLJ algorithm. This is described at a high level in your textbook, pp. 455-456.
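Before diving into BNLJ, here is a toy version of the PhysicalPlanBuilder dispatch just described (tiny stand-in classes keep the sketch self-contained; a real builder would visit your actual logical operators and construct physical operator objects, not strings):

```java
// Toy PhysicalPlanBuilder: walks a logical tree and chooses a join
// implementation based on a flag that, in practice, comes from a config file.
abstract class LogOp { abstract String accept(ToyPlanBuilder b); }

class LogScan extends LogOp {
    final String table;
    LogScan(String t) { table = t; }
    String accept(ToyPlanBuilder b) { return b.visit(this); }
}

class LogJoin extends LogOp {
    final LogOp left, right;
    LogJoin(LogOp l, LogOp r) { left = l; right = r; }
    String accept(ToyPlanBuilder b) { return b.visit(this); }
}

class ToyPlanBuilder {
    final boolean useBNLJ;  // would be read from the configuration file
    ToyPlanBuilder(boolean useBNLJ) { this.useBNLJ = useBNLJ; }
    String visit(LogScan s) { return "Scan(" + s.table + ")"; }
    String visit(LogJoin j) {
        // For SMJ you would also insert sort operators on both inputs here.
        return (useBNLJ ? "BNLJ" : "TNLJ")
                + "(" + j.left.accept(this) + ", " + j.right.accept(this) + ")";
    }
}
```

The double-dispatch via accept() is the same visitor-pattern shape you used for JSqlParser expressions in Project 1.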
The basic idea is to read the outer relation one block at a time into a buffer, and execute the following logic:

procedure BNLJ(outer R, inner S)
  for each block B of R do
    for each tuple s in S do
      for each tuple r in B do
        if r and s satisfy the join condition then
          add the new tuple formed from r and s to the result

Of course, this is the logic to compute the entire result, whereas in the iterator model you need to output one tuple at a time. You will therefore have to restructure the above triple-nested-loop structure for the getNextTuple() method, and your operator will need to keep some state between invocations so it knows where to resume. You already had to do something like this for the tuple-nested-loop join, but it will be a little more involved here due to the three nested loops.

You will notice the description in your textbook talks about building a hashtable for each block of the outer. This is a good refinement for equijoins, but requires careful handling if the join condition is not equality. Yes, most real-world joins are equijoins, but your BNLJ algorithm should support all join conditions as specified in the Project 1 description. Thus, if you choose to implement a hashtable, you need to do something to handle non-equality join conditions as well. Alternately, you may omit the hashtable and iterate over the entire buffer for each tuple of the inner relation, as suggested by the above pseudocode.

As regards implementing the buffer, we are going to “cheat” a bit. Our buffer pages will not directly correspond to the file pages from Section 2.2. This is because we are not building a full-fledged buffer manager, so there is no reason to have uniform treatment for the buffers in your TupleReader/TupleWriter and for the buffers in the BNLJ. The buffer in your BNLJ can just be a data structure that contains Tuples. However, the size of this buffer should be configurable, and it should be something that can be passed into the operator’s constructor.
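One way to restructure the triple loop for getNextTuple() is to keep the loop indices as operator state. The following is a toy in-memory version (invented names; it joins on equality of the first attribute purely for illustration, stands in a List for the inner child, and takes the block size in tuples directly):

```java
import java.util.*;

// Toy BNLJ in iterator form: the current outer block plus the two loop
// indices survive between getNextTuple() calls, so the join can resume
// exactly where it left off.
class BNLJSketch {
    final Iterator<int[]> outer;   // stands in for the outer child operator
    final List<int[]> inner;       // re-scanning this list ~ reset() on child
    final int blockSize;           // max tuples per outer block
    final List<int[]> block = new ArrayList<>();
    int innerIdx = 0, blockIdx = 0;

    BNLJSketch(List<int[]> outerTuples, List<int[]> innerTuples, int blockSize) {
        this.outer = outerTuples.iterator();
        this.inner = innerTuples;
        this.blockSize = blockSize;
        loadBlock();
    }

    void loadBlock() {  // pull outer tuples one at a time until the block fills
        block.clear();
        while (block.size() < blockSize && outer.hasNext()) block.add(outer.next());
        innerIdx = 0; blockIdx = 0;
    }

    int[] getNextTuple() {
        while (!block.isEmpty()) {
            while (innerIdx < inner.size()) {
                while (blockIdx < block.size()) {
                    int[] r = block.get(blockIdx++), s = inner.get(innerIdx);
                    if (r[0] == s[0]) {            // toy join condition
                        int[] joined = new int[r.length + s.length];
                        System.arraycopy(r, 0, joined, 0, r.length);
                        System.arraycopy(s, 0, joined, r.length, s.length);
                        return joined;
                    }
                }
                blockIdx = 0; innerIdx++;          // next inner tuple, rescan block
            }
            loadBlock();                           // next outer block, rescan inner
        }
        return null;                               // outer relation exhausted
    }
}
```

A real operator would evaluate the full join condition with your expression visitor and call reset() on the inner child instead of re-indexing a list, but the resume-from-saved-indices structure is the same.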
To maintain some residual consistency across the implementation, we require the ability to set the size of the buffer in pages, where a page is 4096 bytes. Your BNLJ constructor can calculate how many tuples of the outer relation will fit in a page, assuming each tuple has size 4 * the number of attributes it contains. (Note the discrepancy with respect to the page format from Section 2.2, where eight bytes are used up for metadata.) The BNLJ operator can then internally use a Tuple buffer with the appropriate maximum size in tuples. Note that your textbook talks about reserving two of the pages for input and output of the inner relation; we do not need to worry about those here, because those two buffers are maintained in the TupleReader and TupleWriter. Thus, the buffer size parameter for your BNLJ should just be the number of “pages” to devote to each block of the outer relation.

To read a block of the outer relation, the BNLJ operator should repeatedly call getNextTuple() on the outer child until the buffer is filled. You may wonder if this means that we are regressing to tuple-at-a-time I/O. However, note that if the outer child is a base table and you are using a page-at-a-time TupleReader, you are actually doing page-at-a-time file I/O even though it “looks like” the BNLJ is pulling the tuples one at a time. Of course, if the outer child is not a base table but another operator, the BNLJ needs to pull tuples one at a time anyway.

3.3.2    Integrate your BNLJ operator with the rest of your code

Your PhysicalPlanBuilder should set the number of pages to be used for the BNLJ buffer when it creates a BNLJ physical operator. This means you need a way to specify:

• which physical implementation should be used for the logical join operator (TNLJ or BNLJ)
• if the desired physical implementation is BNLJ, how many buffer pages to use

You should specify the desired join algorithm and the number of buffer pages in a configuration file.
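The tuple-capacity arithmetic described above (4096-byte “pages”, 4 bytes per attribute, and no per-page metadata for this calculation) comes down to a one-line helper, sketched here with invented names:

```java
// Capacity calculation for the BNLJ block buffer: how many outer tuples fit
// in the requested number of 4096-byte "pages", assuming 4 bytes/attribute
// and (per the project text) no page metadata in this calculation.
class BnljBuffer {
    static final int PAGE_SIZE = 4096;

    static int tuplesPerBlock(int numPages, int numAttributes) {
        int tupleSize = 4 * numAttributes;          // bytes per tuple
        return (numPages * PAGE_SIZE) / tupleSize;  // integer division: whole tuples only
    }
}
```

For example, with 1 page and 4-attribute tuples the block holds 4096 / 16 = 256 tuples.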
When your PhysicalPlanBuilder is constructed, it should read this configuration file and set appropriate fields internally so it can do the desired thing during plan construction. The format for the configuration file is described in Section 2.3.

Note that the above implies you should hang on to your Project 1 TNLJ implementation. You will be comparing your other join implementations against it, both to ascertain correctness and to benchmark running times, as explained in Section 3.6. In fact, this is a good time to run some queries and compare the execution times with BNLJ versus TNLJ for various buffer sizes in the BNLJ. The results of the queries should obviously be the same, although they may be ordered differently.
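For those timing comparisons, the System.currentTimeMillis() bracketing mentioned in Section 3.1 is all you need. A minimal harness might look like this (dump() is represented by a Runnable here, since the sketch has no real operators):

```java
// Minimal timing harness: bracket the evaluation with currentTimeMillis()
// and return the elapsed wall-clock time in milliseconds.
class QueryTimer {
    static long timeMillis(Runnable dump) {
        long start = System.currentTimeMillis();
        dump.run();   // in your code: plan.dump(...) on the physical plan
        return System.currentTimeMillis() - start;
    }
}
```

Wall-clock timing like this is noisy for fast queries, which is why the text suggests relations of 5000-10000 tuples to get meaningful ballpark figures.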


[SOLVED] FINT B337F Autumn 2024 Assignment Java

FINT B337F (Autumn 2024) Assignment

Weighting: 40% of final course score
Deadline: Nov 18, 2024

In this individual assignment, you need to develop an algorithm to trade a given stock/index that can demonstrate positive return and an outperformance versus its buy-and-hold return over a defined period of time, and aim to achieve comparable results in the future. By the deadline, submit a complete project report (max 1000 words) covering the topics below.

Market Microstructure Analysis: (10 marks)
• Stocks/indices analysed
• Stock/index chosen
• Opportunity (supply/demand imbalance) identified
• Opportunity explained (why it exists)

Design: (10 marks)
• Strategies defined
• Paper portfolio size (in $ terms) assumed
• Turnover assumptions (on an annual basis, in relation to portfolio size)
• Trading cost assumptions (in % terms)
• Market impact (implementation shortfall) assumptions (in % terms)
• Expected outperformance vs buy and hold over a period of time
• Expected maximum underperformance (peak to trough) at any point in time

Implementation: (10 marks)
• Source code
• Types of data used
• Period of data used
• In-sample (training) and out-of-sample (testing) period chosen

Risk Management: (5 marks)
• Possible human interventions
• Risk management policies

Evaluation: (5 marks)
• In-sample versus out-of-sample results
• Turnover assumed vs incurred
• Maximum underperformance assumed vs incurred
• Risks identified and proposed fine-tuning

After submitting your report, you will be required to give an oral presentation in English to authenticate your work. Please ensure that you upload your PowerPoint file to the Online Learning Environment (OLE) before the presentation date. Following your presentation, there will be a question and answer (Q&A) session. You will be informed of the presentation date in due course.
