Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] WM9A9-15 Big Data, Analytics & Optimisation 24/25 (SQL)

Module title & code: WM9A9-15 Big Data, Analytics & Optimisation 24/25
Assessment type: Essay
Weighting of mark: 70%

Assignment brief:

You have been hired by a branded hotel chain called Stellar Hotel Group to provide consultancy in big data and analytics technology. The company owns and operates six midscale hotels in the UK:
· Stellar Resort Hotel - Cornwall
· Stellar City Hotel - London
· Stellar Coastal Hotel - Brighton
· Stellar Highland Hotel - Inverness
· Stellar Lakeside Hotel - Windermere
· Stellar Urban Hotel - Manchester

The company is based in London, while each hotel has its own independent operations team on site. Each hotel uses its own Property Management System (PMS), which manages daily operations such as bookings, check-ins/check-outs, and billing. These systems store data locally, making it challenging to gain a unified view of operations across all properties. The company also uses HubSpot for managing customer interactions and data, and Google Analytics for tracking the performance of marketing campaigns, but neither is integrated with the booking and CRM systems. Overall, data is currently stored in disparate systems, and there is limited use of advanced analytics. Basic reporting is done using tools like Excel and basic SQL queries, which are insufficient for deriving deep insights.

In this context, the company faces several challenges. Customer data is collected separately at each hotel, resulting in a fragmented view of guest preferences and behaviour. This lack of integration prevents the company from fully understanding its customers, leading to missed opportunities for effective marketing and a consistent customer experience. The company also struggles to optimise room pricing due to limited analytics capability; room rates are often set based on historical data alone. In addition, data is not tracked and analysed consistently within the company, which is leading to inefficient allocation of resources and budgets.

Given these challenges, the company is seeking opportunities to apply big data analytics technology. As a consultant, you are expected to provide a report with solutions for the application of big data technologies in the company. In particular, your report should include:
· A critical evaluation of the company's current data technology setting and the new opportunities for applying big data analytics technologies in this company
· Recommendation(s) of big data analytics technology for addressing the existing issue(s), with consideration of the potential challenges/risks associated with extended use of big data and analytics technology by the company. Some guidance should be provided on mitigating the risks while implementing the new technologies within the company
· A dashboard solution for the company to track data more efficiently and support decision making. You are required to use the given dataset to build a demo dashboard, along with some explanation and evaluation.

Instructions for your dashboard solution:
§ A clear target for your dashboard
§ A demonstration of how your dashboard follows best practices of data visualisation
§ An evaluation of how your dashboard would effectively help track data and support decision-making within the organisation
§ Any plan for future improvement (e.g. getting more data or a deployment plan)

You may make additional assumptions about the company as you see fit, to help you answer the questions more realistically. Please state clearly what your assumptions are; they should be well integrated with the given information. An additional document, along with the dataset for the dashboard design, will be accessible from the module's Moodle site. References from both academic and commercial sources should be included.

Word count: Suggested word count ~2,800 words for the main body of content, not including executive summary, table of contents and reference list. Plus or minus 10% of the word limit is acceptable.

Module learning outcomes (numbered):
L1. Demonstrate a comprehensive understanding of the key differences between Big Data technologies and analysis methods and traditional approaches.
L2. Evaluate real-world scenarios and devise appropriate analytical solutions.
L3. Demonstrate a comprehensive understanding of the core concepts of visual communication and data visualisation.
L4. Collaboratively analyse digital business requirements and practically implement analytics and optimisation techniques in real-world settings.

Learning outcomes assessed in this assessment (numbered): L1, L2, L3
Marking guidelines: See below
Academic guidance resources: Time will be reserved during the module to review the module assignment and for students to raise questions, and students are permitted to ask the Module Leader questions about the assignment up to the submission deadline.
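For illustration, the kind of unified reporting the brief gestures at ("basic SQL queries" over consolidated PMS data) can be sketched with Python's built-in sqlite3 module. The bookings schema and all figures below are invented for the example, not taken from the Moodle dataset.

```python
import sqlite3

# Hypothetical unified bookings table consolidating each hotel's PMS export;
# schema and numbers are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE bookings (
    hotel   TEXT,
    nights  INTEGER,
    revenue REAL
)""")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [("Stellar City Hotel - London", 2, 380.0),
     ("Stellar City Hotel - London", 1, 150.0),
     ("Stellar Resort Hotel - Cornwall", 3, 420.0)],
)

# Basic cross-property reporting: total revenue and average daily rate (ADR)
# per hotel, the kind of view the siloed per-hotel systems cannot produce.
rows = conn.execute("""
    SELECT hotel,
           SUM(revenue)               AS total_revenue,
           SUM(revenue) / SUM(nights) AS adr
    FROM bookings
    GROUP BY hotel
    ORDER BY total_revenue DESC
""").fetchall()
```

A dashboard tool would sit on top of queries like this one; the point of the consolidation step is that a single GROUP BY can compare all six properties at once.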

$25.00

[SOLVED] FMPH 223 Longitudinal Data Analysis - Spring 2024 Final Exam (Web)

FMPH 223 Longitudinal Data Analysis - Spring 2024 Final Exam

You have 3 hours to complete the test. There are ten questions for a total of 100 points; each question is worth 10 points. Please return the exam as a .pdf document. No particular formatting is required.

In a randomized, double-blind, parallel-group, multicenter study comparing two oral anti-fungal treatments (Itraconazole and Terbinafine) for toenail infection, patients were evaluated for the degree of onycholysis (separation of the nail plate from the nail bed) at baseline (week 0) and at weeks 4, 8, 12, 24, 36, and 48 thereafter. The onycholysis outcome variable is binary ("none or mild" versus "moderate or severe"). This variable was evaluated on 294 patients, for a total of 1908 measurements. The main objective of the analyses is to compare the effects of the two treatments on changes in the probability of moderate or severe onycholysis over the duration of the study.

The data are in the file Toenail.dat. Each row of the data set contains the following five variables: ID = patient ID; Y = binary onycholysis response, 0 = none or mild, 1 = moderate or severe; Treatment = 1 for Terbinafine (novel drug), 0 for Itraconazole (standard treatment); Month = the exact timing of measurements in months; Index = visit number for each participant, 1-7, corresponding to scheduled visits at 0, 4, 8, 12, 24, 36, and 48 weeks. (Some participants have missing visits.) Figure 1 displays the observed proportions and log-odds of onycholysis by study week, for each of the two treatment arms.

Note: the first 36 rows in Toenail.dat include the study and data description, and they should be omitted when reading in the dataset - use read.table(..., skip=36).

Figure 1: Observed (i) proportions and (ii) log-odds of onycholysis by study week for the two treatment arms in the Onycholysis randomized clinical trial.

1. Consider a generalized linear mixed effects model, with randomly varying intercepts, for the patient-specific log odds of moderate or severe onycholysis. Fit a model with linear trends for the log-odds over time, with common intercept for the two treatment groups, but different slopes:

(M1)  logit{E(Y_ij | b_i)} = (β1 + b_i) + β2 Month_ij + β3 Treatment_i × Month_ij,

where, given b_i, Y_ij is assumed to have a Bernoulli distribution. Assume that b_i ~ N(0, σ_b²). In the model equation M1 above, what is the interpretation of parameters β2 and β3?

2. Based on the results of the M1 model fit, is Itraconazole an effective treatment of onycholysis? Summarize the efficacy of this drug treatment.

3. Based on this model, is Terbinafine effective in treating onycholysis? Summarize the effects of this drug.

4. Figure 1 suggests that the logistic GLME with linear time trend may not be correctly describing the time effect. Consider models M2 and M3, which expand model M1 in the following ways:
(M2) A model including quadratic polynomial time trends
(M3) A model including cubic polynomial time trends
All models include subject-specific random intercepts and suitable time × treatment interactions. Show that model M2 provides a better fit to the data than the linear time trend model. Use the likelihood ratio test or the Wald test. State the statistical hypothesis being tested.

5. Show that model M3 does not provide a better fit than the quadratic time trend model M2. Use the likelihood ratio test or the Wald test. State the statistical hypothesis being tested.

6. Using M2, is there a significant difference in efficacy between the two treatment arms? State and test an appropriate statistical hypothesis.

7. Figure 1 suggests that the time trend may be suitably modeled by the following model M4, using a linear spline time trend with a knot at 6 months (approximately week 24):

(M4)  logit{E(Y_ij | b_i)} = (β1 + b_i) + β2 Month_ij + β3 (Month_ij − 6)+ + β4 Treatment_i × Month_ij + β5 Treatment_i × (Month_ij − 6)+,

where x+ = max(x, 0) for any real number x. As before, b_i ~ N(0, σ_b²) are independent subject-specific random intercepts. Show that model M4 provides a better fit to the data than models M1 and M2.

8. Based on M4, is there a significant difference in efficacy between the two treatment arms? State and test an appropriate statistical hypothesis.

9. Based on M4, estimate the following quantities, including 95% confidence intervals:
(a) OR_I, the odds ratio of onycholysis at one year versus baseline, for a participant in the Itraconazole arm.
(b) OR_T, the odds ratio of onycholysis at one year versus baseline, for a participant in the Terbinafine arm.
(c) The treatment effect at one year of treatment, defined as the ratio of odds ratios ROR = OR_T / OR_I.

10. Briefly summarize the results of the analysis of the data from the onycholysis randomized clinical trial, conducted at questions 1-3. Mention at least two strengths and at least two limitations of the conclusions regarding the efficacy of Terbinafine versus Itraconazole from this analysis.
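For orientation on question 9, the point estimates follow directly from the M4 coefficients: at Month = 12 the spline basis (Month − 6)+ equals 6 (and 0 at baseline), so log OR_I = 12β2 + 6β3 and log OR_T = 12(β2 + β4) + 6(β3 + β5), giving ROR = exp(12β4 + 6β5). The coefficient values below are hypothetical placeholders, not the actual fit:

```python
import math

# Hypothetical M4 coefficient estimates (placeholders, not fitted values).
b2, b3, b4, b5 = 0.40, -0.55, -0.10, 0.02

# Subject-specific odds ratios at one year (Month = 12) versus baseline;
# the spline term (Month - 6)+ contributes 6 at Month = 12 and 0 at Month = 0.
log_or_i = 12 * b2 + 6 * b3                  # Itraconazole arm
log_or_t = 12 * (b2 + b4) + 6 * (b3 + b5)    # Terbinafine arm
or_i, or_t = math.exp(log_or_i), math.exp(log_or_t)
ror = or_t / or_i                            # equals exp(12*b4 + 6*b5)
```

Note these are conditional (subject-specific) odds ratios given the random intercept b_i; confidence intervals would come from the delta method or the covariance matrix of the fitted coefficients.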

$25.00

[SOLVED] EE6407 Assignment 2

Assignment 2

Please submit your Assignment 1 and Assignment 2 (compiled into one pdf file) to the submission portal of EE6407 Assignments 1&2, under the Assignments folder, by 23 March 2025.

Question: Given a 2-class pattern classification problem, where the training data is given in data_train and the class labels are given in label_train. The test data is given in data_test (23 samples).
(1) Train a Fisher linear discriminant classifier: detail the learning process, and give the values of the weight vector w, the bias term w0, and the decision rule.
(2) Use the trained classifier in part (1) to predict the class labels of the test data: give the class labels of all the testing samples.
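As a sketch of the learning process in part (1), the classic Fisher discriminant can be computed with NumPy. The synthetic two-dimensional data below stands in for data_train/label_train, which are not reproduced here:

```python
import numpy as np

def fisher_lda_fit(X1, X2):
    """Fit a two-class Fisher linear discriminant.
    Returns weight vector w and bias w0 for the rule:
    assign class 1 if w @ x + w0 > 0, else class 2."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix S_w = S1 + S2
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2
    w = np.linalg.solve(Sw, m1 - m2)   # w ∝ S_w^{-1} (m1 - m2)
    w0 = -0.5 * w @ (m1 + m2)          # threshold at midpoint of projected means
    return w, w0

def fisher_lda_predict(X, w, w0):
    return np.where(X @ w + w0 > 0, 1, 2)

# Synthetic, well-separated stand-in for data_train / label_train.
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(40, 2))    # class 1
X2 = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(40, 2))  # class 2
w, w0 = fisher_lda_fit(X1, X2)
pred = fisher_lda_predict(np.vstack([X1, X2]), w, w0)
```

The midpoint threshold is one common choice for w0; a prior-weighted threshold is another, and the write-up should state whichever rule is used.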

$25.00

[SOLVED] CS918 Sentiment Classification for Social Media Assignment Two 2024-25

Assignment Two: Sentiment Classification for Social Media
CS918: 2024-25
Submission: 12 pm (midday) Thursday 27 March 2025

Notes
a) This exercise will contribute towards 30% of your overall mark.
b) Submission should be made on Tabula and should include Python code written in a Jupyter Notebook and a report of 3-5 pages summarising the techniques and features you have used for the classification, as well as the performance results.
c) You can use any Python libraries you like. Re-use of existing sentiment classifiers will receive lower marks. The idea is for you to build your own sentiment classifier.

Topic: Building a sentiment classifier for Twitter/X

SemEval competitions involve addressing different challenges pertaining to the extraction of meaning from text (semantics). The organisers of those competitions provide a dataset and a task, so that different participants can develop their systems. In this exercise, we will focus on Task 4 of SemEval 2017 (http://alt.qcri.org/semeval2017/task4/). We will focus particularly on Subtask A, i.e. classifying the overall sentiment of a tweet as positive, negative or neutral.

As part of the classification task, you will need to preprocess the tweets. You are allowed (and in fact encouraged) to reuse and adapt the preprocessing code you developed for Coursework 1. You may want to tweak your preprocessing code to deal with particularities of tweets, e.g. #hashtags or @user mentions. You are requested to produce a standalone Jupyter Notebook that somebody else could run on their computer, with the only requirement of having the SemEval data downloaded. Don't produce a Jupyter Notebook that runs on some preprocessed files that only you have, as we will not be able to run that.

Exercise Guidelines

• Data: The training, development and test sets can be downloaded from the module website (semeval-tweets.tar.bz2). This compressed archive includes 5 files: one used for training (twitter-training-data.txt), one for development (twitter-dev-data.txt), and another 3 used as different subsets for testing (twitter-test[1-3].txt). You may use the development set as the test set while you are developing your classifier, so that you can tweak your classifiers and features; the development set can also be useful for computing hyperparameters, where needed. The files are formatted as TSV (tab-separated values), with one tweet per row that includes the following values:

tweet-id<tab>sentiment<tab>tweet-text

where sentiment is one of {positive, negative, neutral}. The tweet IDs will be used as unique identifiers to build a Python dictionary with the predictions of your classifiers, e.g.:

predictions = {'163361196206957578': 'positive', '768006053969268950': 'negative', ...}

• Classifier: You are requested to develop 3 classifiers that learn from the training data and test on each of the 3 test sets separately (i.e. evaluating on 3 different sets). You are given the skeleton of the code (sentiment-classifier.tar.bz2), with an evaluation script included, which will help you develop your system in a way that we will then be able to run on our computers. Evaluation on different test sets allows you to generalise your results: you may achieve an improvement over a particular test set just by chance (e.g. overfitting), but improvements over multiple test sets make it more likely to be a significant improvement. You should develop at least 3 different classifiers, which you will then present and compare in your report. Please develop at least 2 classifiers based on traditional machine learning methods such as MaxEnt, SVM or Naïve Bayes, trained on different sets of features (you could use the scikit-learn library). Then, train another classifier based on an LSTM using PyTorch (and optionally the torchtext library) by following the steps below:

a) Download the GloVe word embeddings and map each word in the dataset to its pre-trained GloVe word embedding. First go to https://nlp.stanford.edu/projects/glove/ and download the pre-trained embeddings from 2014 English Wikipedia into the "data" directory. It's an 822MB zip file named glove.6B.zip, containing 100-dimensional embedding vectors for 400,000 words (or non-word tokens). Unzip it. Parse the unzipped file (it's a txt file) to build an index mapping words (as strings) to their vector representations (as number vectors). Build an embedding matrix that will be loaded into an Embedding layer later. It must be a matrix of shape (max_words, embedding_dim), where each entry i contains the embedding_dim-dimensional vector for the word of index i in our reference word index (built during tokenization). Note that index 0 is not supposed to stand for any word or token -- it's a placeholder.

b) Build and train a neural model built on an LSTM. Define a model which contains an Embedding layer with the maximum number of tokens set to 5,000 and embedding dimensionality of 100. Initialise the Embedding layer with the pre-trained GloVe word vectors. You need to determine the maximum length of each document. Add an LSTM layer, then add a Linear layer which acts as the classifier. Train the basic model with an 'Adam' optimiser. You need to freeze the embedding layer by setting its weight.requires_grad attribute to False so that its weights will not be updated during training.

• Evaluation: You will compute and output the macro-averaged F1 score of your classifier for the positive and negative classes over the 3 test sets. An evaluation script is provided, which has to be used in the skeleton code provided. This evaluation script produces the macro-averaged F1 score you will need to use. You can also compute a confusion matrix, which will help you identify where your classifier can be improved as part of the error analysis. If you perform error analysis that has led to improvements in the classifier, this should be described in the report. To read more about the task and how others tackled it, see the task paper: http://alt.qcri.org/semeval2017/task4/data/uploads/semeval2017-task4.pdf

Marking will be based on:
a. Your performance on the task: good and consistent performance across the test sets. While you are given 3 test sets, we will be running your code on 5 test sets to assess its generalisability. Therefore, making sure that your code runs is very important. [25 marks]
b. Clarity of the report. [20 marks]
c. Producing runnable, standalone code. [20 marks]
d. Innovation in the use of features and deep learning architectures (e.g. BERT, prompting strategies with language models, etc.). [25 marks]
e. Methodological innovation. [10 marks]
Total: 100 marks
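Step (a) of the LSTM recipe above can be sketched as follows; the three-line "GloVe file" and the word index are made-up stand-ins for the real glove.6B.100d.txt and your tokenizer's output (real vectors are 100-dimensional, not 3):

```python
import numpy as np

# Stand-in for a few lines of glove.6B.100d.txt (3-d vectors here only to
# keep the example readable; the real file uses 100-d vectors).
fake_glove_lines = [
    "the 0.1 0.2 0.3",
    "good 0.4 0.5 0.6",
    "bad -0.4 -0.5 -0.6",
]
embedding_dim = 3

# 1. Parse the file into an index mapping word -> vector.
embeddings_index = {}
for line in fake_glove_lines:
    parts = line.split()
    embeddings_index[parts[0]] = np.asarray(parts[1:], dtype="float32")

# 2. Hypothetical word index produced during tokenization; index 0 is the
#    placeholder and must not map to any word.
word_index = {"good": 1, "bad": 2, "the": 3, "unseenword": 4}
max_words = 5000

# 3. Build the (max_words, embedding_dim) matrix; rows for words missing
#    from GloVe, and row 0, stay all-zero.
embedding_matrix = np.zeros((max_words, embedding_dim), dtype="float32")
for word, i in word_index.items():
    if i < max_words and word in embeddings_index:
        embedding_matrix[i] = embeddings_index[word]
```

The resulting matrix is what you would copy into the PyTorch Embedding layer's weight before freezing it with weight.requires_grad = False.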

$25.00

[SOLVED] LLP714 Corporate Social Responsibility Second Term 2024-2025 PRACTICE EXAM

LLP714 Corporate Social Responsibility Second Term 2024-2025 PRACTICE EXAM

Pick one of the following two question sets, and write an essay of not more than 2,500 words to answer it. Please clearly indicate upfront in your answer which of the two question sets you are answering. Consider lecture materials, readings, tutorials and group presentations in your answer. You must write the essay by yourself. Remember to use in-text referencing. Please note the guidance on Learn concerning academic misconduct, plagiarism, and the use of generative AI. The exam must be submitted via LEARN by the indicated deadline, and will be worth 70% of your final grade.

Question #1: The business case for Corporate Social Responsibility (CSR) suggests that firms can achieve financial success while addressing social and environmental concerns. Using only core readings from the course, compare how this argument is developed by Barnett (2019) and Aguilera et al. (2007). How do these authors explain the mechanisms through which CSR may create value for firms? Drawing on Jackson et al. (2014), how does corporate irresponsibility complicate the business case for CSR? Using examples from the class, analyze a case where CSR initiatives were effective and one where they failed, illustrating both the strengths and limitations of the business case argument.

Question #2: Corporate Social Responsibility (CSR) is shaped by different institutional environments, influencing how companies engage with social and environmental issues. Using only core readings from the course, compare how institutions shape CSR in different countries, drawing on Matten & Moon (2008) and Tarim et al. (2021). How do explicit and implicit CSR approaches vary across institutional contexts, and what role do regulatory and market factors play in shaping corporate behavior? Drawing on Reinecke & Donaghey (2018), how do contexts of weak state regulation complicate the adoption and enforcement of CSR? Provide examples from the class to illustrate your arguments.

$25.00

[SOLVED] CS 338 Winter 2025 Assignment 3

CS 338 - Winter 2025 Assignment 3

Introduction
You have been asked by the Waterloo Recreational Sports Association (WRSA) to help them with their database design. The WRSA has come up with a database specification and included it in the next section. You will be required to create an ER model based on the specification. Once you have created the ER model, you will map it to a relational model using the steps from lecture. Please upload your answers as a single PDF file. Refer to Learn for the deadline.

Database specification
The WRSA oversees several sports. Each sport has a unique name, the number of referees required to referee matches of that sport, and a list of locations where the sport can be played. For simplicity, each location is a single string. The WRSA is comprised of many teams. Each team plays exactly one sport, and every sport has some team associated with it. Each team has a unique name. A team may choose to have one or more coaches. However, since the WRSA is recreational, teams are not required to have a coach. Coaches have a name, address, phone number, and certification date. Players who are registered in the WRSA are members of teams. Each player can be a member of multiple teams, but they must be a member of at least one team. Players have a name, age, phone number, and address. Coaches and players are covered by the WRSA's insurance and each have an insurance number. Each player must have at least one emergency contact. The emergency contacts each have a name, address, relation to the player, and phone number. A player cannot be a coach and cannot have two emergency contacts with the same name. The WRSA selects several players to act as mascots. Each mascot must have a nickname. Every team plays in matches with other teams. A match is given a unique match number, a location, and a date and time. Matches are always between two different teams, and the outcomes are not recorded. Each match is refereed by other players who are not involved in that specific match.

Part I: ER model
Create an ER model based on the database description. Make a registrant entity that is a superclass of players and coaches. Give registrants unique IDs.

For each entity you create, ask yourself:
• what are its attributes?
• what attributes can make up its key?
• is the entity weak?
• is the entity a subclass of another entity?
• what relations does it have with other entities?
• are any attributes multivalued?

For each relation you create, ask yourself:
• what is its cardinality ratio?
• what are the participation constraints?
• is it an identifying relationship?

Come up with at least one derived attribute and include it in your ER model.

Part II: Relational model
Create a relational model based on your ER model. Make sure to follow the six steps outlined in your lecture notes.
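As one illustration of the superclass/subclass step in Part II (not the required answer — all table and column names here are invented), the registrant hierarchy and the weak emergency-contact entity might map to SQL like this, shown via Python's built-in sqlite3:

```python
import sqlite3

# Illustrative mapping: one table per class, with each subclass's primary
# key doubling as a foreign key to the Registrant superclass, and
# EmergencyContact as a weak entity identified by its owning player.
ddl = """
CREATE TABLE Registrant (
    rid          INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    address      TEXT,
    phone        TEXT,
    insurance_no TEXT NOT NULL
);
CREATE TABLE Player (
    rid INTEGER PRIMARY KEY REFERENCES Registrant(rid),
    age INTEGER
);
CREATE TABLE Coach (
    rid       INTEGER PRIMARY KEY REFERENCES Registrant(rid),
    cert_date TEXT
);
CREATE TABLE EmergencyContact (
    rid      INTEGER REFERENCES Player(rid),
    name     TEXT,
    address  TEXT,
    relation TEXT,
    phone    TEXT,
    PRIMARY KEY (rid, name)  -- no two contacts with the same name per player
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
conn.execute("INSERT INTO Registrant VALUES (1, 'Sam', '1 King St', '555-0100', 'INS-9')")
conn.execute("INSERT INTO Player VALUES (1, 21)")
conn.execute("INSERT INTO EmergencyContact VALUES (1, 'Pat', '2 Queen St', 'parent', '555-0101')")
conn.commit()
```

The composite primary key (rid, name) on EmergencyContact is how the "no two emergency contacts with the same name" constraint and the identifying relationship both surface in the relational model.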

$25.00

[SOLVED] 21797 Strategic Supply Chain Management

Midterm Exam

1. What do we mean by 'the critical window of exposure' of a dietary factor in relation to a disease outcome? (2pts)
2. Why may a blood biomarker of a nutrient not be closely correlated to the usual intake of said nutrient? Explain at least 4 reasons. (4pts)
3. Factor analysis separates individuals into mutually exclusive groups based on dietary intake. True or false? (1pt)
4. Use the table below to answer the following questions:
a) What proportion (%) of participants were NOT misclassified when using vitamin C from diet only? Show your work. (1pt)
b) Based on this information, if you were to analyze the association between vitamin C and cardiovascular disease using data from this sample, would it be more appropriate to conduct the analyses using measures of vitamin C from diet + supplements or using vitamin C from diet only? Why or why not? Justify your answer. (2pts)
5. The table below shows the mean Healthy Eating Index 2010 scores for participants and nonparticipants in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) before and after the WIC food package was revised in 2009. Use the table to answer the following questions:
a. What are the mean Healthy Eating Index 2010 scores for WIC participants before and after the WIC food package change? (1pt)
b. What do these scores tell you about the diet quality of WIC participants after the WIC food package change relative to before the change? (2pts)
6. Answer the following questions for each type of observational study design listed below. (5pts)
7. You are investigating the association between childhood consumption of dairy and a rare autoimmune disorder among adults. Which study design AND dietary assessment method would you use to conduct this research? Justify your answer. (3pts)
8. Do the following scenarios represent random or systematic error? Justify your answers. (3pts)
a. Height was consistently 3cm higher than the true height:
b. A research assistant took 3 heights, and they were slightly different:
c. FFQ did not differentiate between citrus fruits and other fruits:
9. Which of the following statements are true? Select all that apply. (1pt)
a. Systematic error reduces validity
b. Systematic error reduces precision
c. Random error reduces validity
d. Random error reduces precision
10. The repeated measures taken on a sphygmomanometer (blood pressure monitor) are exactly the same within a period of five minutes. Does this information mean that the tool is precise or that it is accurate? (2pts)
11. Several research questions are listed below. For each of the questions, is it more appropriate to use a biomarker or dietary intake to assess the dietary exposure of interest? Justify your answer. If you choose a biomarker, explain why a biomarker is appropriate. If you choose dietary intake, explain why dietary intake is appropriate. (3pts)
a. What is the association between fruit and vegetable intake and risk of stroke?
b. What is the association between intake of calcium and the development of type 2 diabetes in adolescents?
c. What is the association between maternal vitamin D status during pregnancy and offspring birthweight?
12. What are two strategies to reduce random error (and/or the effect of random error) in a study examining diet-disease associations? Explain your answer. (4pts)
13. Use the table below (Table 5) from the Women's Health Initiative Observational Study to answer the following questions:
a. What is the independent variable? (1pt)
b. What is the dependent variable? (1pt)
c. The range of caffeine intake in quintile 5 is 315-794 mg. Assuming that 794 mg caffeine intake is considered an outlier, will it influence the results in this analysis? Justify your answer. (2pts)
d. This study found no significant association between the dietary exposure and health outcome. Assuming that there is a true association, what are some potential reasons that a significant association was not detected? Discuss at least 4 reasons. (4pts)
14. Provide one reason that researchers exclude participants with extreme (very high or very low) energy intakes when studying diet-disease associations in nutritional epidemiology. (2pts)
15. Use the table below (Table 2) from the Nurses' Health Study to answer the following questions:
a. Looking at model 3, what is the association between each dietary exposure and type 2 diabetes, comparing the results of quintile 5 versus quintile 1? Using full sentences, please comment on the direction and magnitude of any significant associations you identify (for non-significant associations, you can simply state that there is no association). (6pts)
b. Is the p-for-trend significant for model 3 for any of the dietary exposures? What does this indicate? (3pts)
16. Use the table below (Table 3) from the Singapore Chinese Health Study to answer the following questions:
a. List the different approaches used to measure diet quality in this analysis. (3pts)
b. The approaches used to measure diet quality in this analysis are data driven. True or false? (1pt)
c. Provide at least one rationale for conducting an analysis using diet quality instead of a single nutrient/food group analytical approach. (2pts)
17. Use the table below (Table 3) from the Nurses' Health Study to answer the following questions:
a. Is it correct to control for total energy intake in this analysis? Why or why not? Justify your answer. (2pts)
b. Looking at model 3, what is the association between each dietary exposure and ischemic stroke, comparing the results of quintile 5 versus quintile 1? Using full sentences, please comment on the direction and magnitude of any significant associations you identify (for non-significant associations, you can simply state that there is no association). (4pts)
c. For each dietary exposure, is the p-for-trend significant for model 3? What does this indicate? (3pts)
d. Based on these findings, what public health recommendations would you make regarding grain consumption in the context of ischemic stroke among women? Justify your answer. Please do not simply restate the results; I want to know what you think the public health implications of the results should be. (2pts)
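For question 4a, the arithmetic is just the share of participants on the diagonal of the cross-classification table. Since the exam's table is not reproduced here, the counts below are hypothetical, purely to show the calculation:

```python
import numpy as np

# Hypothetical cross-classification of vitamin C quintiles: rows = quintile
# from diet only, columns = quintile from the reference measure. These
# counts are made up; the exam's actual table is not reproduced here.
table = np.array([
    [30,  8,  2,  0,  0],
    [ 7, 25, 10,  3,  0],
    [ 3,  9, 22,  9,  2],
    [ 0,  4, 11, 24,  6],
    [ 0,  1,  3,  7, 34],
])

correctly_classified = np.trace(table)  # counts on the diagonal (same quintile both ways)
total = table.sum()
pct_not_misclassified = 100 * correctly_classified / total
```

"Not misclassified" means landing in the same quintile under both measures, so the answer is simply 100 × (diagonal sum) / (grand total).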

$25.00

[SOLVED] EE 5711 Power Electronic and Systems (Python)

EE 5711: Power Electronic and Systems

I. SIMULATION PROJECTS FOR 2025

II. PROJECT OBJECTIVES
•   Students should be able to develop analytical models of power electronics systems
•   Students should be able to carry out simulation/computer-based calculations
•   Students should be able to evaluate the analysis and performance
•   Students should be able to write a report

III. AREA OF WORK
•   To study the application of modelling and simulation of power electronic systems
•   You can choose any of the 4 areas of application given below
•   You will have to develop a model in Python, Octave or MatLab (m files only; any Simulink-based simulations will not be given any marks)
•   Using the model, you will need to carry out a performance analysis of the system
•   You can carry out incremental changes in the power electronics system models and compare the performance
•   As the objective is learning by doing, you can refer to published work and use that as a basis for your problems

IV. TOPICS TO BE SELECTED FOR THE PROJECT
For the projects we will focus on developing averaged small-signal models, so that we can study stability and control performance in most power converter systems. Another area we can choose to study is the performance of PWM methods. We will not combine both of these, as the models become very complicated and would be beyond the scope of this module. Broad themes for selection of the project topics are listed below:
•   DC-DC converter system modelling
– Small-signal stability analysis of a DC-DC system (could be applied to battery charging/solar PV/a DC microgrid with 1 source, 1 load and 1 storage)
– Small-signal control model and analysis (you can select any controller)
– PWM and spectral analysis of a DC-DC converter
•   DC to AC system
– PWM analysis using FFT, using space vector modulation or another modulation
– PWM analysis in a multilevel inverter
– Averaged-model-based control methods and analysis
•   AC to DC system
– Averaged-model-based control for applications like battery charging and energy storage systems
•   If you want to do circuit-based simulation, you may choose a circuit that you want to simulate and use open-source software like NgSpice or XSPICE-based Spice OPUS. Alternatively, you can use the academic version of PSIM

V. TASKS TO BE DONE AND ASSESSMENT
… (Python, Octave, MatLab (Octave compatible)): 50%
…: 50%
Total: 100%
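As a minimal example of the "averaged model" theme, a duty-cycle-averaged ideal buck converter can be simulated in a few lines of Python with forward-Euler integration; the component values are made up but typical, and the steady state should approach v = d·Vin:

```python
# Averaged (duty-cycle averaged) model of an ideal buck converter:
#   L di/dt = d*Vin - v,    C dv/dt = i - v/R
# Component values are illustrative; steady state tends to v = d*Vin.
Vin, d = 24.0, 0.5            # input voltage and duty cycle
L, C, R = 100e-6, 100e-6, 10.0
dt, steps = 1e-6, 100_000     # forward-Euler step and horizon (0.1 s)

i, v = 0.0, 0.0               # inductor current, capacitor voltage
for _ in range(steps):
    di = (d * Vin - v) / L
    dv = (i - v / R) / C
    i += di * dt
    v += dv * dt
```

With d = 0.5 and Vin = 24 V the averaged model settles at v ≈ 12 V and i ≈ v/R = 1.2 A; linearising this model around that operating point is the starting point for the small-signal stability and control studies listed above.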

$25.00

[SOLVED] LLAW6055 Law of International Finance - Research Essay 2024-5

LLAW6055 Law of International Finance - Research Essay 2024-5 Instructions: 1.   Students may form. a group of two people to do this essay. 2.   You only need to submit one essay per group via Moodle. 3.   The assignment is due on 3rd April 2025, 11pm (HK time). 4.   You will need to submit this essay along with the HKU Faculty of Law coversheet with declarations about the use of AI. No coversheet and inaccurate information provided will receive a 50 percent penalty. 5.   The minimum word count for this essay is 2,000 words (excluding the questions, standardized wording of the LMA agreement, and references). The maximum word count is 4,000 words (excluding the questions, standardized wording of the LMA agreement, and references). 6.   All references and citations in this essay must use OSCOLA reference guidelines – see https://www.law.ox.ac.uk/sites/default/files/migrated/oscola_4th_edn_hart_2012.pdf 7.   Essays without any citations and references will be given the mark of zero. 8.   No late essays are permitted. 9.   Any information that are not provided in the questions below, but form. part of your answers or key to your answers must be stated as “assumptions” or “expectations” in your essay. Note that you may NOT make assumptions about Hong Kong laws (meaning no made-up laws). 10. Note that any information from websites must be cited and referenced. 11. Follow all additional instructions in the question below. Scenario: Your client is a newly established developer in Hong Kong, called “Big Dreams Property Group” is registered in Hong Kong. This joint-venture is made up several companies from Hong Kong, the Mainland and Indonesia. However, only the Mainland company has any experience in property development in Shanghai. Also, none of the companies in this group have any experience with advanced technology. Much of their business portfolio is made up of restaurants, hotels, car leasing, clothes manufacturing, shipping, and farming. 
They are applying for a syndicate loan for USD400 million for a property development project in Hong Kong’s Northern Metropolis. The developer (your client) shall be building new offices as well as residential blocks at the San Tin Technopole, an innovation and technology centre which will include the Hong Kong-Shenzhen Innovation & Technology Park (HSITP) located on the Lok Ma Chau Loop and an Innovation & Technology Park in San Tin. It is a big project that is being promoted by the Hong Kong government. However, it is also a risky project because the area has yet to be developed, and there is no government funding, subsidies or financial support provided to developers. Besides, whether many new start-ups and advanced technology companies will establish themselves at HSITP is unknown. Furthermore, there is a lot of competition to house new start-ups and advanced technology companies in Macau, Guangzhou, and Hangzhou with many newly proposed technology parks. Your bank is a major and established bank in Hong Kong, called “Fa Tat” Bank. It is the arranger bank for this syndicate loan. You are an in-house counsel tasked with drafting the contract (agreement) for the syndicate loan. Your task is to draft specific parts of this contract. Apart from the total figure stated in this question, all other figures in the contract and information about the client, like assets for security, you will need to state as assumptions or expectations. The question is in two parts (total marks 100): 1.   You are asked to draft three sections of the contract by your boss. First, the “financial covenants”, “general undertakings” and “events of default” sections (Note that any information, such as numbers, used in drafting these sections should be entirely made up [within reason]. The information assumed about the client, including figures, need not be accurate. However, state your assumptions, expectations, and legal concerns, if any.). 
Here you may include any security you require from your client and any assurances like negative pledges. You can use the Loan Market Association (LMA) agreement clauses 21, 22 and 25 as templates for your answers (see Moodle under the news/announcement for the LMA agreement). However, if you have changed any of the standardized wording of the LMA clauses, you must ‘bold and underline’ them. You are NOT required to draft any taxation matters of the contract. Second, your bank is the arranger along with 4 other banks (you can create made-up names for the 4 banks for this essay), and so you will need to draft a separate section/clause of the contract (not part of the LMA agreement) to reduce the risk (fiduciary duties) by including limitations like a disclaimer (note the law used in this contract must be Hong Kong law, including statutes and case law). (40 marks) 2.   Also, you are to draft a memo to the Head of Commercial Lending of your bank at the Tokyo office to explain what you have done and why (including the legal implications) you have chosen the wording in the three sections (in the above question) for your client and a separate clause for a contract between your bank and the other banks for the syndicate loan. State all the assumptions, expectations, and legal concerns you have about the loan that are relevant to the three sections and the separate clause between your bank and the other banks you have drafted. Explain how you think your draft can meet the requirements. Assume that the Head of Commercial Lending has some but not a lot of knowledge about Hong Kong laws. (60 marks)

$25.00 View

[SOLVED] DTS208TC Data Analytics and Visualisation Coursework 2

Module code and Title DTS208TC Data Analytics and Visualisation School Title School of AI and Advanced Computing Assignment Title Coursework 2 Submission Deadline 03/Apr/2025 Final Word Count N/A DTS208TC Data Analytics and Visualisation Coursework 2 Submission deadline: 11:59pm, 03/Apr/2025 Percentage in final mark: 50% Learning outcomes assessed: C. Select appropriate data analysis and visualisation methods to highlight particular features for a given data type and a set of analysis objectives or user requirements. Late policy: 5% of the total marks available for the assessment shall be deducted from the assessment mark for each working day after the submission date, up to a maximum of five working days Risks: •    Please read the coursework instructions and requirements carefully. Not following these instructions and requirements may result in loss of marks. •    Plagiarism results in award of ZERO mark. •    The formal procedure for submitting coursework at XJTLU is strictly followed. Submission link on Learning Mall will be provided in due course. The submission timestamp on Learning Mall will be used to check late submission. •    Academic Integrity Policy is strictly followed. Overview In this individual coursework, you will use Python to analyse air quality data from the United States across multiple years and states, focusing specifically on California. The dataset includes various air quality metrics, population estimates, and yearly statistics from 2000 to 2022. Your task is to explore trends, identify patterns, and predict California’s Median AQI for 2022 using both data visualization and machine learning. Additionally, you will analyse and compare the predictions from these two methods. Dataset The dataset AQI By State 2000-2022 contains the following columns: Geo_Loc - Geographic location identifier. Year - The year of the observation. State - The name of the U.S. state. Pop_Est - Estimated population for the year and state. 
Dys_Blw_Thr - Number of days with air quality below a specific threshold. Dys_Abv_Thr - Number of days with air quality above a specific threshold. Good Days - Number of days classified as “Good” air quality. Moderate Days - Number of days classified as “Moderate” air quality. Unhealthy for Sensitive Groups Days - Number of days classified as “Unhealthy for sensitive groups” air quality. Unhealthy Days - Number of days classified as “Unhealthy” air quality. Very Unhealthy Days - Number of days classified as “Very Unhealthy” air quality. Hazardous Days - Number of days categorized as “Hazardous.” Max AQI - Maximum Air Quality Index recorded in a year. Median AQI - Median Air Quality Index recorded in a year. Submission and Requirements You are required to submit the following files as part of the coursework: 1.      Task-Specific Python Files: •       Each task must be implemented in a separate Python script file. •       The files should be named task1.py and task2.py •       Your code needs to include appropriate comments and be well-documented. 2. Report: •       Complete the provided CW2_Report.docx •       Please include all source code and results in the report. •       Ensure that any non-obvious parts of your implementation are explained clearly in the report. •       The report should be submitted in .pdf format. 3.      Other •       The original dataset Tasks Given the dataset, you are expected to complete the following tasks using the Python programming language. You are allowed to use existing Python libraries to solve the tasks. T1. Nationwide Visualisation of Air Quality (45 marks) Explore how air quality has changed across the U.S. over time and analyse its geographic distribution, focusing on patterns and regional differences. Based on the given requirements, you need to select the most appropriate visualization designs (e.g., marks, channels, etc.). T1-1: Create a visualization showing the trends of Max AQI for all states from 2000 to 2022. 
T1-2: Create a visualization showing the distribution of Max AQI by different states for year 2022. T1-3: Create a visualization showing the distribution of air quality days (Good Days, Moderate Days, Unhealthy Days, Very Unhealthy Days and Hazardous Days) in California for the year 2000. T1-4: Describe the design of T1-1, T1-2 and T1-3. Please fill in the required information in the report. T2. Predictive Analysis for California (55 marks) Focus on California’s air quality data to predict its Median AQI for 2022 using two approaches: visual analysis and model-based prediction. T2-1: Create 5 data visualisation results to show the relationships between California’s Median AQI and its influencing factors (Year (2000 - 2021), Pop_Est, Good Days, Moderate Days, Unhealthy Days) with suitable designs. T2-2: Based on the visualisation results, describe the relationship between these influencing factors and the Median AQI. Using these relationships and the 2022 influencing factors data for California, predict the Median AQI for California in 2022 without relying on model training. Justify your prediction. T2-3: Train a regression model using California’s data from 2000 to 2021. The model should aim to learn the relationships between Median AQI (target variable) and its influencing factors (Year (2000 - 2021), Pop_Est, Good Days, Moderate Days, Unhealthy Days). Choose 2 evaluation metrics to evaluate your model and discuss the performance. T2-4: Using the trained model and the corresponding factors data from California in 2022, predict the 2022 Median AQI value for California. T2-5: Compare the results of the visual prediction from T2-2, the model-based prediction from T2-4, and the ground truth in the dataset. Discuss the differences and explain which approach you find more reliable and why.
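A T2-3-style regression can be sketched as follows. The column meanings mirror the dataset description above, but the numbers here are synthetic stand-ins (not the real AQI data), so treat this as a minimal illustration of the fit-and-evaluate loop rather than a model of the actual coursework dataset. The 2022 feature row is likewise made up.

```python
import numpy as np

# Synthetic stand-in for California's 2000-2021 rows -- NOT the real data.
years = np.arange(2000, 2022, dtype=float)
pop_est = 33.0 + 0.15 * (years - 2000)            # population in millions (made up)
good_days = 120.0 + (years - 2000)
moderate_days = 150.0 - (years - 2000)
unhealthy_days = 30.0 - (years - 2000) // 2
median_aqi = 80.0 - 0.5 * (years - 2000)          # synthetic target

# Ordinary least squares via lstsq: an intercept column plus the 5 factors.
X = np.column_stack([np.ones_like(years), years, pop_est,
                     good_days, moderate_days, unhealthy_days])
coef, *_ = np.linalg.lstsq(X, median_aqi, rcond=None)
pred = X @ coef

# Two evaluation metrics, as T2-3 asks: MAE and R^2.
mae = np.mean(np.abs(pred - median_aqi))
r2 = 1 - np.sum((median_aqi - pred) ** 2) / np.sum((median_aqi - median_aqi.mean()) ** 2)
print(f"MAE={mae:.4f}, R2={r2:.4f}")

# T2-4 analogue: predict from a (made-up) 2022 feature row.
x_2022 = np.array([1.0, 2022.0, 36.3, 142.0, 128.0, 19.0])
pred_2022 = x_2022 @ coef
```

In the real coursework you would swap the synthetic arrays for the California rows of the dataset (e.g. via pandas) and could equally use scikit-learn's `LinearRegression` with `mean_absolute_error` and `r2_score`; the evaluation logic is the same.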

$25.00 View

[SOLVED] BEES2041 Data Analysis for Life and Earth Scientists Practical report 1

BEES2041 Data Analysis for Life and Earth Scientists Practical report 1 Reproducible Research Summary Assessment title: Practical Report 1 Reproducible Research Weighting: 20% Due Date: Week 5, 11:59 pm, Friday 21st March 2025 Group work: No Length: Document up to 2000 words with up to 5 figures and tables, plus a video up to 3 minutes long. Submission requirements: You need to submit three files, as per instructions below. Feedback Details: written feedback will be provided on the returned work 2 weeks after the submission. Aligned CLOs: 3,4,5 Rationale Many scientific questions can be addressed using available data. This exercise exposes students to two common open-source data resources used in the biological, earth and environmental sciences: records of species observations and records of climate. Robust science requires the sharing of data sets and the code required to process, visualise, and analyse the data. When groups of researchers are working on the same problem, they also need to share their work prior to the study’s completion and publication. To give you experience in producing a document that would allow open sharing of all analytical methods, you will prepare notes and code that would detail each of the steps required to complete a data analysis. Assessment details Use of open resources to study climate preferences of plant species You are working in a team investigating the climatic preferences of plant species. By climatic preferences, we mean the climate where these species typically occur in the wild. Your team’s task is to quantify the average annual rainfall experienced by each species in a genus and then answer the question of whether species differ in the climates where they occur. To answer this question, your colleague, Data Dan, has assembled a dataset with all known observations for plant species in Australia and the climate where each observation was recorded. To build this, Dan used two great open-source datasets: 1. 
Observation records: The Atlas of Living Australia is an online repository collating data about the distribution of plants, animals, and fungi in Australia. Data are collated from a wide variety of sources, including plot surveys, herbarium records, and citizen science projects. These data are then contributed to GBIF – the Global Biodiversity Information Facility. Read more at https://www.ala.org.au and https://www.gbif.org/country/AU/about 2. Worldclim: A database of high spatial resolution global weather and climate data. https://www.worldclim.org/data/bioclim.html The resulting dataset was very large, so Dan randomly selected up to 100 observations per species. Data Dan is now handing the dataset to you to do the analysis. The data is in the folder `data`, with variables as described in the file `*-metadata.csv`. Your task is to select a single genus of plants, then, using the dataset provided, answer the following: 1. Estimate a mean and confidence interval for the average rainfall experienced by each species. 2. Test whether species in the genus differ in their climate preference, measured as annual rainfall, and by how much. Please choose a genus in the dataset that somehow relates to your name. E.g. sounds similar or starts with the same letter. Details: Create a new project notebook (qmd or rmd file) in RStudio that contains code to fully execute your analysis, along with text to describe your analysis. Your report should include sections that a) Explain the motivation for the study and question being asked. b) Explain what genus you selected and how that relates to your name. c) Load any R packages that you require. d) Import the data set, extract relevant parts, and describe the data you are working with. e) Check for and remove any errors in the data. f) Explain any statistical tests run and interpret the results. g) Code to create graphs that visualise how the data addresses the question. 
h) Text to describe any limitations you perceive in this analysis. Assessment You are required to submit three files: 1) RStudio notebook (.qmd or .rmd) file, containing the notes and code completing your entire analysis, from beginning to end 2) An html report generated from running your notebook file, and 3) A short video (up to 3 mins) with a screen capture, where you explain how the code works. Upload those files to Moodle (found in the Assessment section) Further details will be provided on how this will be assessed. A reminder: the work submitted must be entirely your own work. Marking criteria Data organisation (10) - Successfully imports the data - Correctly identifies relevant variables - Identifies any errors in dataset and takes appropriate action Suitable statistical analysis (40) - Chooses appropriate statistical analysis to address the question and explains this choice - Successful fitting of model using appropriate variables - Extracts and presents relevant parameters and/or statistics of model - Appropriate interpretation of model, clear translation from statistical methods to biological context Effective use of figures to communicate results (20) - Chooses suitable graph types to display methods, data and/or results - Figures are easy to engage with: well structured, clear labels, caption; effective use of colour or symbols where appropriate - Figures suitably integrated with text Presentation (15) - Informative yet succinct text presenting material - Includes sections providing details on introduction, methods, results - Appropriate use of subheadings - Follows length guidelines. No more than 8 paragraphs or 2000 words. 
- Note: Discussion section NOT required Code (10) - Suitable choice of packages, functions and methods; brief text to explain these choices - Code is well structured, with comments explaining the purpose of specific lines where this is not immediately obvious from the code - Code successfully renders to produce html document - Best responses will be succinct and may take extra steps to hide unnecessary detail or outputs. Video explanation of code (15) - Briefly explain these 3 sections of your code: Data organisation, Statistical analysis, Creation of figures. For each section, choose 1-3 lines of code and explain how this works. - Maximum 3 minutes
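The core of the analysis, per-species means with confidence intervals followed by a one-way ANOVA across species, can be sketched as below. The assignment itself must be done in R/RStudio; this Python sketch with made-up rainfall numbers (hypothetical Acacia species) only illustrates the statistical logic. A normal-approximation 95% CI is used for brevity; in R you would likely use t-based intervals and `aov()`.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual-rainfall samples (mm/yr) for three species in one genus.
samples = {
    "Acacia dealbata": rng.normal(900, 120, 60),
    "Acacia implexa": rng.normal(700, 100, 55),
    "Acacia pycnantha": rng.normal(650, 90, 70),
}

# (1) Mean and approximate 95% CI per species (normal approximation).
for name, x in samples.items():
    se = x.std(ddof=1) / np.sqrt(len(x))   # standard error of the mean
    print(f"{name}: mean={x.mean():.0f}, "
          f"95% CI=({x.mean() - 1.96 * se:.0f}, {x.mean() + 1.96 * se:.0f})")

# (2) One-way ANOVA F statistic by hand: do species differ in rainfall?
groups = list(samples.values())
grand = np.concatenate(groups).mean()
k, n = len(groups), sum(len(g) for g in groups)
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {F:.1f} on ({k - 1}, {n - k}) df")
```

A large F relative to the F distribution on (k-1, n-k) degrees of freedom indicates the species means differ; in the real report you would also check ANOVA assumptions and follow up with pairwise comparisons to answer "by how much".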

$25.00 View

[SOLVED] ECON154 Business Statistics

ECON154 Business Statistics May Examinations 2024 Section A: Multiple-Choice Questions (45 marks) 1. You wish to make a pie chart displaying the share of individuals who have selected one of three options: A, B, or C. If you interview 250 people and 75 select C, how many degrees of the circle will C be? A) 90 B) 180 C) 270 D) 144 2. You have received data on temperatures in Celsius/Fahrenheit in London during May 2023. What type of variable is temperature? A) Nominal B) Interval C) Ordinal D) Ratio 3. What is the median of the following numbers: A) 8 B) 9 C) 7 D) 8.5 4. P(A) = 0.36, what is P(Aᶜ)? A) 0.47 B) 0.87 C) 0.74 D) 0.64 5. Data is normally distributed with mean=0 and variance=1. What is the skewness of this distribution? A) 1.0 B) 0.5 C) 0.0 D) -0.5 6. Given the following scatter plot, the correlation between these two variables would be approximately what? A) Positive B) Negative C) Zero D) Independent 7. What is the probability a standard normal variable, Z, falls above 0.44? A) 0.1700 B) 0.3300 C) 0.6700 D) 0.0934 8. If the probability of an event, R, is 0.6 and the probability of the intersection of event R and another event S is 0.3, what is the probability of S conditional on R? A) 0.65 B) 4.00 C) 0.25 D) 0.50 9. What will be the relationship between the median and mean of the following distribution? A) Mean = Median B) Mean > Median C) Mean < Median D) Not enough information to say 10. A student's grade in a business course is comprised of attendance (5%), homework (20%), and a final exam (75%). Her scores for each of the categories are 70 (attendance), 95 (homework), 60 (final exam). Calculate her overall grade. A) 90.0 B) 80.0 C) 67.5 D) Not enough information to say. 11. Two variables, P and S, have a covariance of -15. P has a variance of 9 and S has a variance of 25, what is the correlation coefficient between P and S? A) 0.067 B) -1.00 C) 1.00 D) -0.067 12. 
In regression analysis, the variable that is being predicted is the A) response, or dependent, variable B) independent variable C) intervening variable D) is usually x 13. You have a binomial distribution where you draw 5 observations where the probability of success is 0.60. What is the standard deviation of this distribution? A) 1.58 B) 1.37 C) 1.73 D) 1.09 14. A regression analysis between sales (in $1000) and price (in dollars) resulted in the following equation, where x is price and y is sales: ŷ = 50,000 - 8x The above equation implies that an A) increase of $1 in price is associated with a decrease of $8 in sales B) increase of $8 in price is associated with an increase of $8,000 in sales C) increase of $1 in price is associated with a decrease of $42,000 in sales D) increase of $1 in price is associated with a decrease of $8000 in sales 15. If the interquartile range is 60 and the third quartile is 34, beyond what value would we consider an observation an outlier? A) 34 B) 96 C) 124 D) 94 Section B: Short-Answer Question (55 marks) Short Answer 1 (30 points) According to HealthX Institute, annual expenditure for prescription drugs is $838 per person in the country. A sample of 60 individuals from the Northwest region shows a per person annual expenditure for prescription drugs of $745. Use a population standard deviation of $300 to answer the following questions. a)  Formulate the null and alternative hypotheses for a test to determine whether the sample data support the conclusion that the population annual expenditure for prescription drugs per person is lower in the Northwest than in the rest of the country. [5 points] b)  Compute the value of the test statistic. [5 points] c)  What is the p-value? [5 points] d)  At α = 0.01, what is your conclusion? [5 points] e)  What is a Type I error in this situation? 
[5 points] f)   What is a Type II error in this situation? [5 points] Short Answer 2 (25 points) The Dow Jones Industrial Average (DJIA) and the Standard & Poor’s 500 (S&P 500) indices are used as measures of stock market movement. Based on 15 weeks of closing prices for both DJIA and S&P 500, a regression equation was estimated by regressing the S&P500 closing price on the DJIA closing price as the independent variable. The following table provides the regression output: a)  Develop the estimated regression equation. [5 points] b)  Is there a significant relationship between S&P 500 closing price and DJIA closing price? Test for significance at α = 0.05. [5 points] c)  Does the estimated regression equation provide a good fit? Explain. [5 points] d)  Suppose that the closing price for DJIA is 13,500. Predict the closing price for S&P 500. [5 points] e)  Provide an interpretation for the slope of the estimated regression equation. [5 points]
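For Short Answer 1, parts (b)-(d) reduce to a one-tailed z-test with a known population standard deviation. A quick numerical check (a sketch of the arithmetic, not the required written answer) looks like this:

```python
import math

# Sketch of Short Answer 1 (b)-(d): one-tailed z-test with known sigma.
# H0: mu >= 838 vs H1: mu < 838 (Northwest spends less than the country).
mu0, xbar, sigma, n = 838, 745, 300, 60

z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic
p = 0.5 * math.erfc(-z / math.sqrt(2))      # lower-tail p-value, Phi(z)
reject_at_1pct = p < 0.01                   # conclusion at alpha = 0.01

print(f"z = {z:.2f}, p-value = {p:.4f}, reject H0: {reject_at_1pct}")
```

Here z is roughly -2.40 with a p-value just under 0.01, so at α = 0.01 the data support the conclusion that Northwest expenditure is lower; a Type I error in this situation would be concluding that Northwest expenditure is lower when in fact it is not.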

$25.00 View

[SOLVED] CIT 596 - HW5

CIT 596 - HW5 (deadlines as per Gradescope) This homework deals with the following topics * DAGs, Top sort. * BFS. * DFS. Student Name: 〈 Your Name 〉 Collaborator(if any) : 〈 Your Collaborators 〉 (at most 2 other collaborators.) Please read these instructions • No handwritten solutions. Handwritten solutions get a 0. • WEAPARTE = Write an Efficient Algorithm in plain English (or pseudocode if you must), Analyze its Run Time, with an Explanation. • The homework has to be submitted in electronic form as a pdf file. You can use any editing software you want in order to produce the pdf. • Unless otherwise specified you HAVE to explain your answer. Please write explanations. • If any question involves making a graph you have to tell us what the vertices are and what the edges are before you apply any algorithm. In many cases the question actually is mainly about how some kind of real world problem can be mapped to a graph theory problem. • You cannot use any inbuilt function of your favourite programming language. If you are unsure what is allowed and what is not allowed in pseudocode, please look at the book. Remember that you are writing an algorithm and not Java or Python or Scala or whatever. • If a question has a recursive algorithm, you have to provide the recurrence and then solve it using some theorem or just expanding the recurrence. • As with all HWs, we are looking for the most efficient algorithm in terms of running time. The algorithm also has to be correct (you do not have to worry about graphs with no edges and small edge cases like that though). • We will not worry about the distinction between O and Θ. As long as you provide a tight bound (for example if an algorithm is actually O(n log n) and you say O(n³) you will lose points) you are fine. • If you are explaining the running time of a graph algorithm, it is important to mention whether your graph is being stored in an adjacency list representation or an adjacency matrix representation. 
• The number of points associated with a question is a general guideline for toughness and/or amount of writing we expect. Sometimes a question might be worth 4 or 5 points only because it is lengthy. • If you want to draw a graph for your HWs I would recommend using google drawing or graphviz or draw.io. Once you have a picture just use the includegraphics command (as you can see in this tex file). • A result shown in class and/or provided by the textbook (Algorithms Unlocked) can be used as is. • You are allowed to write things like ”Run merge sort” or ”Run binary search”. Remember that if you do so, we will assume you mean the standard (as per the textbook/lecture) algorithm. If you want to modify an algorithm then it is your job to write out the full pseudocode. You are allowed to use the pseudocode from the textbook or from CLRS as a starting point. Questions 1. (2 points) Assume that you want to store what we’ll call an Insta graph. Each and every person with an Instagram account is a vertex, and if person X follows person Y there is an edge from X to Y. Will you pick the adjacency list representation or the adjacency matrix representation for this graph? Why? You are allowed to and encouraged to see what following a person on Instagram means in case you do not know how Instagram works. 2. (4 points) Given the graph below (adjacency list representation), run a DFS on it starting from vertex b. Please compute the discovery times and finish times for every single vertex. Also, classify all edges of the graph as either tree edges, forward edges, back edges, or cross-edges. 3. (4 points) We are trying to figure out the number of collaboration groups in CIT5960. We have been given all the information in terms of a file that has each person’s collaborators that they listed when they submitted their HW (imagine the TAs wrote some cool code to do this from your pdf submissions). 
For instance we have info like this (Boulos listed no collaborators) Arvind - Boulos, Paul Paul - Boulos Yi - Arvind George - John, Muhammad Muhammad - George, John Boulos - John - George, Muhammad We know that this file has the following rules • Every student in class appears exactly once. • Every student is going to list at most 2 other people. • There are N students in this class. Using this information, determine the number of collaboration groups using the BFS algorithm as your helper. In the above example there are 2 collab groups: [George, Muhammad, John] and [Yi, Boulos, Paul, Arvind]. The fact that it perhaps breaks some rule of the course is immaterial. It is also reflective of the real world but let us not get swayed by emotions over here. Write your approach using just plain English. Then analyze your algorithm for runtime with an explanation. 4. (5 points) A tree is an undirected, connected, acyclic graph. Given an undirected graph G (you know this is undirected) in adjacency list representation, WEAPARTE to figure out whether the graph is a tree or not. 5. (10 points) You are given the task of assisting a group of historians make sense of some data that they have procured by conversing with citizens of a remote island to learn about the lives of people who have lived there over the past 100 years. From these interviews, they’ve learned about a set of n people (all currently dead). Let us denote them as P1, P2, . . . , Pn. They have a collection of relative facts about these people. There might not be a fact about every pair of people. It is also possible to not have any fact about someone. Also (for the sake of us not having to deal with some quirks) the historians are told that no person died on the same day that someone else was born. Each fact either tells us • person Pi died before Pj was born. If you are concerned about how exactly this input is provided, think of it as a list of ordered pairs. 
• person Pi overlapped at least partially with person Pj . If you are concerned about how exactly these facts are provided, think of it as a list of unordered pairs. They are not sure that all these facts are correct. It has been many years and the islanders have not kept good records. So what they would like you to determine is whether the data they have collected is at least internally consistent - can this set of people actually satisfy all the facts that you got. WEAPARTE for this problem. This problem is slightly different in that the main goal is to construct a ‘useful’ graph from the facts that have been provided. After that graph is created, the problem is relatively simple. Please walk us through how you construct the graph. What are the vertices? What are the edges? Your algorithm should either say ‘OK’ meaning that all the facts could hold true, or it should report (correctly) that no such dates can exist - that is, the facts collected are not internally consistent.
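Although the homework asks for plain English plus runtime analysis rather than code, the idea behind Question 3 can be sketched concretely: treat each student as a vertex, each listed collaboration as an undirected edge, and count connected components with BFS. The names below simply mirror the example in the question; since each student lists at most 2 collaborators, the adjacency lists have O(N) total size, so the whole scan runs in O(N).

```python
from collections import defaultdict, deque

# Each student maps to the collaborators they listed (example from the question).
listings = {
    "Arvind": ["Boulos", "Paul"],
    "Paul": ["Boulos"],
    "Yi": ["Arvind"],
    "George": ["John", "Muhammad"],
    "Muhammad": ["George", "John"],
    "Boulos": [],
    "John": ["George", "Muhammad"],
}

# Build an undirected adjacency structure; keep isolated students as vertices.
adj = defaultdict(set)
for student, collabs in listings.items():
    adj[student]
    for c in collabs:
        adj[student].add(c)
        adj[c].add(student)

# BFS from every unvisited vertex; each fresh start is one collaboration group.
seen, groups = set(), 0
for start in list(adj):
    if start in seen:
        continue
    groups += 1
    queue = deque([start])
    seen.add(start)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)

print(groups)
```

On the example data this finds the two groups stated in the question: [George, Muhammad, John] and [Yi, Boulos, Paul, Arvind].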

$25.00 View

[SOLVED] Data Driven Business

INDIVIDUAL PROJECT •     Course: Data Driven Business The Hospitality Industry Benefits From the Emergence of Big Data When it comes to adopting technology and evolving in a data-driven way, the hospitality industry is known to lag behind. Only recently has the industry begun to learn that there are gaps that should be filled, gaps which exist because they could not previously be foreseen or controlled. Today, Big Data allows the hospitality industry to take better control of their business. Furthermore, insights from Big Data enable them to drive profits and lower costs faster and more easily, so they can go back to doing what they love. Some ways the hospitality industry benefits from the emergence of big data: •   With Big Data and analytics, hotels can target their best repeat clients. Moreover, they can provide extra promotions and incentives while creating perks that help boost the business. •   Big Data analytics are valuable in helping hotels set the best prices for their rooms. This kind of optimization further extends to other services that the hospitality industry provides. •   Big Data and analytics can be used to segment guests according to behaviour, booking trends and other factors, to reveal how likely they are to respond to promotions. It is important for hoteliers to understand guest preferences and more. The hospitality sector caters to millions of people on a daily basis. Meeting the expectations of these people is the key to getting them to return and use these services again and again. With customer data gathered in a single place, where it is easier to see the big picture, hotels can make better informed decisions when it comes to marketing and customer service. Through analytics, hotels can target their best repeat clients with extra promotions and incentives, while building perks and separate deals for guests who do not visit as often, with the hope of boosting their business. 
WORK TO DO •   Choose any hospitality company for your report •   Use secondary research or existing data (if possible) to define the management problem of the company or to identify data and information for your report •   Using data analysis software such as SPSS or Tableau is optional Using the 10 steps of the Data Strategy, develop the whole process of building a data strategy for the company you choose. Please indicate what the objective of each phase is as well as the main activities you will have to undertake. Use examples in each step in order to explain your approach better. Step 1: What is a management / business problem? A problem is a gap between a current unsatisfactory situation and a situation defined as satisfactory. For Step 1, using a CANVAS such as the Data analytics strategy / Data management canvas / Data strategy canvas is highly recommended but not mandatory Step 2: Define a strategic objective Based on the company you choose for your report, and in order to define the strategic objective, you have to identify: •   What is the main business problem of the company? •   Why do you think the strategic objective you chose will be the optimal solution compared to others? Examples of strategic objectives: •   Increase the traffic on their website, •   Increase loyalty customer rate, •   Customer acquisition strategy, •   Reduce operational cost, •   Increase occupation rate, •   Develop new product or service, •   Etc. Step 3: Data Needs & Data Collection Based on the business case / problem or on the strategic objective: •   Identify which data is needed •   Identify the type of the data (structured vs unstructured) •   How will you evaluate the quality of the data? Step 4, 5 & 6: Data Sources, Data Extraction & Data Storage •   What are the different sources of the data needed? •   How will the data be extracted and acquired? •   Where will the data be stored? 
Please specify if data have to be cleaned, transformed or consolidated. Step 7: Data Analysis & Data Modelling Which type of data analysis will be used to achieve the strategic objective? You have to define: •   Data model analysis •   Identify the main methods and algorithms you will use, and why •   What are the main results expected? Step 8: Data Team Which types of competences and skills are needed for the project (data engineering, data architect, data analyst, data scientist, business analyst, etc.). Step 9: Test phase Specify whether a test phase is needed to evaluate the quality of the model / solution obtained. If so, specify how you will do it and evaluate the results to validate the final solution. Step 10: Implementation of the final solution Describe how you will implement the final solution

$25.00

[SOLVED] ENVI5705 Assessment 2 Field Report

ENVI5705 – Assessment 2: Field Report

Your task
The report consists of five questions about different issues in designing and executing sampling protocols to answer biological questions. You are required to use data collected during the field trip to write the report, and you can use data from any taxonomic group to answer any of the questions. However, you need to use ALL the taxonomic groups somewhere in your report, and the best answers will include data from more than one taxon.

The Questions
1) Discuss sampling solutions to tackle spatial variation.
2) Discuss possible biases associated with at least two of our sampling techniques.
3) Discuss the importance of incorporating temporal variation into sampling designs.
4) Contrast direct searching vs attractive trapping techniques.
5) Discuss the issues associated with obtaining samples from highly diverse communities.

Report Length
Body of the report: 4000 words maximum, including any figures or tables but NOT including the references.

Referencing style
Use Harvard Style, which uses the author-date approach. See here: https://libguides.library.usyd.edu.au/citation

Grading
This assessment is worth 45% of your final mark. A marking matrix is provided in Canvas under the Assessments tab.

Late Penalties
Deduction of 5% of the maximum mark for each calendar day after the due date. After ten calendar days late, a mark of zero will be awarded.

Points to note
• It is expected that you will carry out your own statistical analysis of the datasets to demonstrate your understanding of analysis and interpretation.
• All data sets must be used in your report.
• For the report, create your own graphs and figures to illustrate your answers.
• Do not present raw data.
• You should aim to write about 2 pages for each question and include no more than 2-3 figures for each question.
• The content of the report will be discussed during class.

$25.00

[SOLVED] ECON154 - STATISTICAL FOUNDATIONS OF BUSINESS ANALYTICS

ECON154 - STATISTICAL FOUNDATIONS OF BUSINESS ANALYTICS
Group Project in MS Excel [100 marks]

Instructions
1. You will submit two files as part of your group project – a report in 1,000 words (MS Word document) and a supplementary data file (MS Excel file).
2. Please make sure to label the question number in your data file for each constructed table when submitting the supplementary data file. You may put each question's tables and charts on a new sheet for better communication.
3. Unmarked tables and charts in the supplementary file will not be graded.
4. Wherever possible, describe the chart or table in your own words.
5. The University of Liverpool's academic integrity policy applies.
6. Late submissions will be subject to the University of Liverpool's late submissions policy.

Study Context
In November 2014, the city of Berkeley in California implemented a tax on sugar-sweetened beverage (SSB) sellers, to discourage SSB consumption due to health issues like diabetes and obesity. The tax of one cent per fluid ounce meant that if retailers raised their prices to exactly counter the effects of the tax, a $1 can of soda (12 oz) would now cost $1.12. But did sellers respond this way? In this project, we will make before-and-after comparisons using data on sugar beverages to learn about the effects of the sugar tax. To do so, we compare the outcomes of two groups, both before and after the policy took effect:
· The treatment group: those who were affected by the policy
· The control group: those who were not affected by the policy.

Before-and-after comparisons of retail prices
We will look at price data from the treatment group (stores in Berkeley) to see what happened to the price of sugary and non-sugary beverages after the tax.
a) Download the 'Dataset Project Sugar Tax' Excel dataset.
b) The first tab of the Excel file contains the data dictionary. Make sure you read the data description column carefully, and check that each variable is in the Data tab.
We will compare the variable price per ounce in US$ cents (price_per_oz_c). We will look at what happened to prices in the two treatment groups before the tax (time = DEC2014) and after the tax (time = JUN2015):
· treatment group one: large supermarkets (store_type = 1)
· treatment group two: pharmacies (store_type = 3).
Before doing this analysis, we will use summary measures to see how many observations are in the treatment and control group, and how the two groups differ across some variables of interest. For example, if there are very few observations in a group, we might be concerned about the precision of our estimates and will need to interpret our results considering this fact. You may use Excel's PivotTable option to make frequency tables containing the summary measures that we are interested in. The tables should be in a different tab to the data (either all in the same tab, or in separate tabs).

Question 1 [15 marks]
Create the following tables:
a) A frequency table showing the number (count) of store observations (store_type) in December 2014 and June 2015, with 'store type' as the row variable and 'time period' as the column variable. For each store type, is the number of observations similar in each time period? [5 marks]
b) Two frequency tables showing the number of taxed and non-taxed beverages in (i) December 2014 and (ii) June 2015, with 'store type' as the row variable and 'taxed' as the column variable. ('Taxed' equals 1 if the sugar tax applied to that product, and 0 if the tax did not apply.) For each store type, is the number of taxed and non-taxed beverages similar? [5 marks]
c) A frequency table showing the number of each product type (type), with 'product type' as the row variable and 'time period' as the column variable. Which product types have the highest number of observations, and which have the lowest number of observations? Why might some products have more observations than others?
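For readers checking their Excel PivotTables against another tool, the same count-by-cell logic can be sketched in a few lines of Python. The rows below are made-up stand-ins, not rows from the 'Dataset Project Sugar Tax' file.

```python
# A minimal stand-in for Excel's PivotTable count: cross-tabulating
# store_type (rows) against time period (columns). Sample data invented.
from collections import Counter

rows = [  # (store_type, time)
    (1, "DEC2014"), (1, "DEC2014"), (1, "JUN2015"),
    (3, "DEC2014"), (3, "JUN2015"), (3, "JUN2015"),
]

freq = Counter(rows)  # count of observations per (store_type, time) cell
for store in (1, 3):
    print(store, freq[(store, "DEC2014")], freq[(store, "JUN2015")])
```

Each printed row corresponds to one row of the Question 1(a) frequency table; the same pattern with a `(store_type, taxed)` key gives the tables for Question 1(b).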
[5 marks]

Question 2 [15 marks]
Next, we are interested in comparing the mean price of taxed and non-taxed beverages, before and after the tax. Calculate and compare conditional means:
a) Create a table similar to Figure 1, showing the average price per ounce (in cents) for taxed and non-taxed beverages separately, with 'store type' as the row variable, and 'taxed' and 'time' as the column variables. Make sure to only include non-supplementary products (supp = 0). Make sure to keep only store types 1 and 3 in the table. [5 marks]
b) Without doing any calculations, summarize any differences or general patterns between December 2014 and June 2015 that you find in the table. [5 marks]
c) Would we be able to assess the effect of sugar taxes on product prices by comparing the average price of non-taxed goods with that of taxed goods in any given period? Why or why not? [5 marks]

              Non-taxed              Taxed
Store type    Dec 2014   Jun 2015   Dec 2014   Jun 2015
1
3
Figure 1. The average price of taxed and non-taxed beverages, according to time period and store type.

Question 3 [20 marks]
To make a before-and-after comparison, we will make a chart to show the change in prices for each store type. Using your table from Question 2:
a) Calculate the change in the mean price after the tax (price in June 2015 minus price in December 2014) for taxed and non-taxed beverages, by store type. [10 marks]
b) Using the values you calculated in Question 3(a), plot a column chart to show this information (as done in Figure 2 below), with store type on the horizontal axis and price change on the vertical axis. Label each axis and data series appropriately. [10 marks]
Figure 2. The change in average beverage prices before and after the tax, by store type, for taxed and non-taxed beverages.
Question 4 [10 marks]
To assess whether the difference in mean prices before and after the tax could have happened by chance due to the samples chosen (and there are no differences in the population means), we calculate the p-value. Let the p-value be 0.04 for large supermarkets and 0.65 for pharmacies in this case. Based on these p-values and your chart from Question 3, what can you conclude about the difference in means? [10 marks]

Question 5 [40 marks]
Follow these data manoeuvre instructions for the next set of questions [5 marks]:
· Delete supp = 1 observations (1162 observations left for Dec 2014 and June 2015).
· Create a new variable labelled time_code with two values 0 and 1, such that time_code = 0 if time = DEC2014 and time_code = 1 if time = JUN2015.
· Create a new variable labelled time_tax with two values 0 and 1, such that time_tax = 1 if both time_code = 1 and taxed = 1, and time_tax = 0 otherwise. See the table below for specific values:

time_code   taxed   time_tax
0           0       0
0           1       0
1           0       0
1           1       1

Now we regress beverage prices on the dummy variables 'taxed' and 'time_code', and the interaction of the two dummies, time_tax.
a) Run the following regression in Excel: use 'price_per_oz_c' as the dependent variable and three independent variables – 'taxed', 'time_code', and 'time_tax'. Report the regression table. [10 marks]
b) Develop the estimated regression equation. [5 marks]
c) Test for a significant relationship for all three independent variables. Use α = 0.05. [5 marks]
d) Did the estimated regression equation provide a good fit? Explain. [5 marks]
e) Do you believe the above estimated regression provides a good prediction/explanation of the impact of the sugar tax on beverage prices? Why or why not? Think about a thought experiment you would do to test the impact of the sugar tax on beverage prices. [10 marks]
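The interaction coefficient in Question 5's regression has a simple interpretation: because the specification is fully saturated in the four (taxed, time_code) cells, the OLS coefficient on time_tax equals the difference-in-differences of the four cell means. The sketch below illustrates that arithmetic in plain Python; the prices are invented for illustration and are not from the project dataset.

```python
# Difference-in-differences from cell means. With regressors taxed,
# time_code, and time_tax = taxed * time_code, the coefficient on
# time_tax equals:
# (taxed after - taxed before) - (non-taxed after - non-taxed before).
rows = [  # (taxed, time_code, price_per_oz_c) -- invented values
    (1, 0, 10.0), (1, 0, 11.0),   # taxed, Dec 2014
    (1, 1, 12.0), (1, 1, 13.0),   # taxed, Jun 2015
    (0, 0, 8.0), (0, 0, 9.0),     # non-taxed, Dec 2014
    (0, 1, 8.5), (0, 1, 9.5),     # non-taxed, Jun 2015
]

def cell_mean(taxed, time_code):
    vals = [p for t, tc, p in rows if t == taxed and tc == time_code]
    return sum(vals) / len(vals)

did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(did)  # 2.0 - 0.5 = 1.5 cents per ounce
```

This is the same subtraction you perform by hand in Question 3(a), which is why the chart and the regression should tell a consistent story.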

$25.00

[SOLVED] ELEC6234 Embedded Processor Synthesis

ELEC6234 Embedded Processor Synthesis
Coursework: SystemVerilog Design of an Application-Specific Embedded Processor

Introduction
This exercise is done individually and the assessment is:
• By formal report describing the final design, its development, implementation and testing.
• By a laboratory demonstration of the final design on an Altera FPGA development system.

Use the following code for the top-level module picoMIPS4test.sv and the clock divider counter.sv. The purpose of the clock divider is to eliminate bouncing effects of the mechanical switches which are used to input data, as outlined by the pseudocode below.

File picoMIPS4test.sv:

    // synthesise to run on Altera DE1 for testing and demo
    module picoMIPS4test(input  logic       fastclk, // 50MHz Altera DE0 clock
                         input  logic [9:0] SW,      // Switches SW0..SW9
                         output logic [7:0] LED);    // LEDs

      logic clk; // slow clock, about 10Hz

      counter c (.fastclk(fastclk), .clk(clk)); // slow clk from counter

      // to obtain the cost figure, synthesise your design without the counter
      // and the picoMIPS4test module using Cyclone V 5CSEMA5F31C6 as target
      // and make a note of the synthesis summary
      picoMIPS myDesign (.clk(clk), .SW(SW), .LED(LED));

    endmodule

File counter.sv:

    // counter for slow clock
    module counter #(parameter n = 24) // clock divides by 2^n, adjust n if necessary
                    (input logic fastclk, output logic clk);

      logic [n-1:0] count;

      always_ff @(posedge fastclk)
        count <= count + 1'b1; // the source text is truncated here; this is the
                               // standard completion of a free-running counter

      assign clk = count[n-1]; // MSB toggles at fastclk / 2^n

    endmodule

$25.00

[SOLVED] ECON3106 Politics and Economics Exercises 2

ECON3106 Politics and Economics – Exercises 2

1    Audits in Brazil
All the questions in this section refer to the paper "Audit risk and rent extraction: Evidence from a randomized evaluation in Brazil" – Zamboni and Litschig (2018).
1.1    Describe ONE situation under which, even if the audit risk of a municipality increases, we could expect no change in rent extraction.
1.2    Describe the policy change in May 2009 that the authors use to identify the effects of auditing.
1.3    Do you think the following statement is true or false? "The authors find that the reduction in rent extraction had negative effects on the welfare of the citizens of the affected municipality." Justify your answer with ONE piece of evidence.

2    Corruption
A small town has 60 voters and 2 types of jobs. 20 voters work as farmers (f) and 40 voters work in the mine (m). The small town must decide whether to build a water dam (d = 1) or not (d = 0). In case the water dam is not built, everybody earns 8. If the water dam is built, the earnings of the farmers increase by 10. Miners do not benefit from the construction of the water dam. The cost of the water dam is 180 and, in case it is built, it must be financed equally by all 60 voters through a head tax. All voters care about the difference between their earnings and the taxes they have to pay.
2.1    Write down the utility function of a farmer and a miner in the case that the water dam is not built. Do the same in the case the water dam is built.
2.2    Is building the water dam efficient according to the social welfare criterion?
Now imagine that the town's mayor works in the mine and he is the only one with the power to decide whether the water dam should be built. A group of 5 farmers decides to bribe the mayor by paying 2 each, only if the water dam is built.
2.3    Will the mayor accept the bribe?
2.4    Is bribing efficient according to the social welfare criterion?
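The head-tax arithmetic in Section 2 can be checked mechanically. The sketch below only sets up the payoffs stated in the problem (base earnings 8, farmer gain 10, cost 180 split over 60 voters); it is a sanity check of the setup, not a substitute for the written answers.

```python
# Payoff setup for Section 2: 60 voters, 20 farmers, 40 miners.
# Head tax if the dam is built: 180 / 60 = 3 per voter.
FARMERS, MINERS, BASE, GAIN, COST = 20, 40, 8, 10, 180
tax = COST / (FARMERS + MINERS)  # 3.0 per voter

u_farmer = {0: BASE, 1: BASE + GAIN - tax}  # dam raises farmer earnings by 10
u_miner = {0: BASE, 1: BASE - tax}          # miners only pay the tax

# Total social welfare with (d = 1) and without (d = 0) the dam.
welfare = {d: FARMERS * u_farmer[d] + MINERS * u_miner[d] for d in (0, 1)}
print(u_farmer[1], u_miner[1], welfare)  # 15.0 5.0 {0: 480, 1: 500.0}
```

Comparing `welfare[0]` and `welfare[1]` is the computation behind the social welfare criterion in question 2.2.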
3    An election
A society of 99 voters must choose a policy p in the space (0, 1). Voter i's utility is given by [utility function omitted in the source]. Two candidates, A and B, propose platforms .30 and .505, respectively. Candidates compete in a plurality election and each voter is expected to vote for the candidate whose platform he likes the most. Each candidate only wants to win the election.
3.1    What is the bliss point of voter i?
3.2    Voters exhibit single-peaked preferences. Show that this is true for i = 40.
3.3    How many votes will each candidate receive?
3.4    According to the Median Voter Theorem, what should we expect the two candidates to propose?
3.5    Suppose now that there are 3 candidates: A, B, and C. The candidates' platforms are respectively .1, .5, and .9. You are told that voters are strategic and all voters i ≤ 50 will vote for A and all voters i > 50 will vote for C (i.e., nobody is voting for B). Is this a Nash equilibrium? (Briefly explain your logic.)

4    Lobbying
In order to bring a fast internet connection to the rural areas, a country has to decide whether to build new antennas outside the cities. Given that there is the same number of people living in the cities and the rural areas, in case the antennas are built the total construction cost of 10 million AUD will be equally split between rural towns and the cities. Rural towns are lobbying for the construction of these antennas, spending a total of 1 million AUD on their lobbying activities. The cities instead are spending 3 million AUD against the construction of the antennas. The probability that the antennas are built is described by the following contest success function: [contest success function omitted in the source]
4.1    Given the lobbying efforts, what is the probability that the bill is introduced?
The social welfare of the rural towns, before taking into account any lobbying activity and taxes, is 10 million AUD in case the antennas are not built, and it increases by 15 million AUD in case the antennas are built.
The social welfare of the cities, before taking into account any lobbying activity and taxes, is instead always 80 million AUD, regardless of whether the antennas are built or not.
4.2    Calculate the total social welfare of this country, taking into account taxes and lobbying efforts.
4.3    Compare this previous situation to one where there is no lobbying and the antennas are not built. Which one is the preferred situation according to the social welfare criterion?

$25.00