Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] EEC 210 Analysis and design of analog integrated circuits HW 5 R

EEC 210 HW 5

1. The band-gap reference shown below is designed to have nominally zero TCF at 25°C. Due to process variations, the saturation current IS of the transistors is actually twice the nominal value. Assume VOS = 0. What is dVOUT/dT at 25°C? Neglect base currents.

2. Simulate the band-gap reference from the previous problem in SPICE. Assume that the amplifier is just a voltage-controlled voltage source with an open-loop gain of 10,000 and that the resistor values are independent of temperature. Also assume that IS1 = 1.25 × 10^−17 A and IS2 = 1 × 10^−16 A. In SPICE, adjust the closed-loop gain of the amplifier (by choosing suitable resistor values) so that the output TCF is zero at 25°C. What is the resulting target value of VOUT? Now double IS1 and IS2. Use SPICE to adjust the gain so that VOUT is equal to the target at 25°C. Find the new dVOUT/dT at 25°C with SPICE. Compare this result with the calculations from the previous problem.

3. Calculate the bias current of the circuit shown below as a function of R, µnCox, and the device sizes. Comment on the temperature behavior of the bias current. For simplicity, assume that Xd = Ld = 0 and ignore the body effect.

4. The circuit used in the previous problem produces a supply-insensitive current. Calculate the ratio of small-signal variations in IBIAS to small-signal variations in VDD at low frequencies. Ignore the body effect but include finite transistor ro in this calculation.
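As a rough illustration of what Problem 1 is probing (not the official solution), the sketch below numerically differentiates a deliberately simplified band-gap model, VOUT(T) = VBE(T) + G·VT, in which VBE is linearized in T and the PTAT gain G is chosen for zero TCF at 25°C. Doubling IS shifts VBE by −VT·ln 2, which changes the slope by −(k/q)·ln 2. All component values (VG0, VBE0) are assumptions for illustration only.

```python
import math

K_OVER_Q = 8.617e-5   # Boltzmann constant over electron charge, V/K
T0 = 298.15           # 25 degC in kelvin
VG0 = 1.205           # assumed extrapolated band-gap voltage, V
VBE0 = 0.65           # assumed V_BE at T0 for nominal I_S, V

# PTAT gain chosen so the nominal output has zero TCF at T0
G = (VG0 - VBE0) / (T0 * K_OVER_Q)

def vout(T, is_factor=1.0):
    """Simplified band-gap output; is_factor scales the saturation current."""
    VT = K_OVER_Q * T
    # Linearized V_BE model; scaling I_S by is_factor subtracts VT*ln(is_factor)
    vbe = VG0 - (VG0 - VBE0) * T / T0 - VT * math.log(is_factor)
    return vbe + G * VT

def slope(T, is_factor=1.0, h=0.01):
    """Central-difference estimate of dVOUT/dT in V/K."""
    return (vout(T + h, is_factor) - vout(T - h, is_factor)) / (2 * h)

print(slope(T0))        # nominal design: ~0 at 25 degC
print(slope(T0, 2.0))   # I_S doubled: shifts by -(k/q)*ln 2, about -60 uV/K
```

Within this toy model the doubled-IS slope is exactly −(k/q)·ln 2; the real problem requires the full VBE(T) expression, so treat this only as a sanity check on sign and magnitude.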

$25.00

[SOLVED] TEC100 Introduction to Information Technology Assessment 2 C/C

Assessment 2 Information

Subject Code: TEC100
Subject Name: Introduction to Information Technology
Assessment Title: Report
Assessment Type: Individual
Word Count: 1,500 words (+/-10%)
Weighting: 35%
Total Marks: 35
Submission: MyKBS
Due Date: Week 10

Your Task

This assessment is to be completed individually. In this assessment, you will write a report on the emerging technologies covered in Weeks 4 to 8 of TEC100 and their potential impact on the IT industry and society.

Assessment Description

Ensure you include a range of emerging technologies that were covered in Weeks 4 to 8 of TEC100 and apply them to the case study outlined below. The emerging technologies you cover may vary from student to student; however, it is recommended to discuss four (4) of them, which you may pick and choose based on the topics covered in Weeks 4 to 8 of TEC100.

Case Study: You are a technology consultant working for a small IT firm. The firm has been approached by a large organisation that wants to explore the potential of the emerging technologies covered in Weeks 4 to 8 of TEC100. As a consultant, you have been tasked with preparing a report that outlines the potential benefits and risks associated with these emerging technologies.

This assessment aims to achieve the following subject learning outcome: LO1 Describe basic concepts of emerging technologies.

Assessment Instructions

• Your report should be submitted in Word or PDF format and be approximately 1,500 words in length. The report should include the following sections:
1. Introduction: A brief introduction to the emerging technologies covered in Weeks 4 to 8 of TEC100. (Recommended ~100 words)
2. Background: Theoretical underpinnings or business cases for the emerging technologies covered in Weeks 4 to 8 of TEC100. (Recommended ~300 words)
3. Discussion: An in-depth discussion of the emerging technologies covered in Weeks 4 to 8 of TEC100, including their potential impact on the IT industry and society. (Recommended ~1000 words)
4. Conclusion: A summary of the report, including key findings and recommendations for future research. (Recommended ~100 words)
5. References: The report should include a minimum of six (6) academic references from credible sources, such as peer-reviewed journal articles, industry reports, or academic books. These references should be cited using the KBS Harvard referencing style.
• Please refer to the assessment marking guide to assist you in completing all the assessment criteria.

$25.00

[SOLVED] Econ 1150 Applied Econometrics Mini-Exam 2 C/C

Econ 1150 Applied Econometrics Mini-Exam 2

1. A standardized test is given to two different school districts. In High School 1, n1 = 100 students are randomly selected to take the test; the sample mean of test scores is Ȳ1 = 75 and the sample variance is s1² = 196. In High School 2, n2 = 90 students are randomly selected; the sample mean is Ȳ2 = 72 and the sample variance is s2² = 100.
(a) For High School 1, we want to test the hypothesis that the population mean of test scores is equal to 72. State the null and alternative hypotheses and calculate the p-value for this test. Can you reject the null hypothesis at the 5% significance level? What about at the 1% significance level?
(b) We want to conduct a hypothesis test to see if the mean test scores differ between High School 1 and High School 2. Construct the 90% confidence interval for the observed difference in sample means Ȳ1 − Ȳ2. Using the confidence interval, can you reject the null hypothesis that mean test scores do not differ between High School 1 and High School 2 at the 10% significance level?

2. You're trying to estimate the effect of work experience (tenure_i), measured in years, on hourly wages (wage_i), and you've estimated the following regression equation (standard errors in parentheses):
(a) Calculate the value of the observed t-statistic for the null hypothesis that the coefficient on tenure_i is equal to zero. Can you reject the null hypothesis at the 5% significance level?
(b) Interpret the constant term and the coefficient. (7.124 is the expected wage rate of someone with 0 years of experience; one more year of experience increases the wage rate by $0.162.)

3. You run a regression of hourly wages (wage_i) on college status (collgrad_i) in Stata. collgrad_i is a random variable equal to 0 if individual i is not a college graduate and equal to 1 if individual i is a college graduate. This is the regression output:
(a) From the table above, can you reject the null hypothesis that the coefficient on collgrad_i is equal to zero at the 1% significance level? Which part of the table tells you this?
(b) Interpret the coefficient on collgrad_i and the constant term.
(c) What is the predicted hourly wage for a college graduate?
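The large-sample calculations in Question 1 can be checked numerically. The sketch below is illustrative only (not the graded solution) and uses the normal approximation, which is standard at these sample sizes:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# High School 1: n = 100, mean 75, sample variance 196; test H0: mu = 72
n1, ybar1, s2_1 = 100, 75.0, 196.0
se1 = math.sqrt(s2_1 / n1)                 # standard error = 1.4
t_act = (ybar1 - 72) / se1                 # observed t-statistic
p_value = 2 * (1 - norm_cdf(abs(t_act)))   # two-sided p-value

# (b) 90% confidence interval for the difference in means
n2, ybar2, s2_2 = 90, 72.0, 100.0
diff = ybar1 - ybar2
se_diff = math.sqrt(s2_1 / n1 + s2_2 / n2)
z90 = 1.645                                # two-sided 10% critical value
ci = (diff - z90 * se_diff, diff + z90 * se_diff)

print(round(t_act, 3), round(p_value, 4))
print(tuple(round(c, 3) for c in ci))
```

With these numbers t ≈ 2.14 and p ≈ 0.032, so the null is rejected at the 5% level but not at 1%; the 90% interval for the difference lies entirely above zero, so equality of means is rejected at the 10% level.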

$25.00

[SOLVED] Math 425 Fall 2024 - HW 12 Java

Math 425 Fall 2024 - HW 12
Due Friday 11/22, 11:59pm, via Gradescope

Please note:
(1) Please include detailed steps. Only providing the result will not receive full credit.
(2) Please write at most one problem on each page. If you reach the bottom, please start a new page instead of writing two columns on one page. If a problem contains multiple small questions, you may write them on one page.
(3) Please associate pages with problems in Gradescope.

For density functions we omit the statement "f = 0 otherwise" for convenience.

1. Consider N independent flips of a coin having probability p of landing on heads. Say that a changeover occurs whenever an outcome differs from the one preceding it. For instance, for HHTHT there are 3 changeovers. Find the expected number of changeovers. Hint: Express the number of changeovers as a sum of Bernoulli random variables.

2. The joint density function of X and Y is given. Find E[X] and E[Y], and show that Cov(X, Y) = 1.

3. Use the same density function as in Problem 2 to find E[X²|Y = y].

4. A fair six-sided die is successively rolled. Let X and Y denote, respectively, the number of rolls necessary to obtain a "6" and a "5". (1) Find E[X]. (2) Find E[X|Y = 1]. (3) Find E[X|Y = 5].

5. The joint density function of X and Y is given. Find E[Y³|X = x].

6. An urn contains 30 balls: 10 red, 8 blue, 12 yellow. Pick 12 balls randomly. Let X and Y denote the number of red and blue balls that are withdrawn. Calculate Cov(X, Y) by defining appropriate indicator (Bernoulli) random variables.
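For Problem 1, the indicator-variable hint leads to E[changeovers] = (N−1)·2p(1−p), since each adjacent pair of flips differs with probability 2p(1−p). A brute-force check (illustrative, not a substitute for the derivation) enumerates all 2^N outcomes:

```python
from itertools import product

def expected_changeovers(N, p):
    """Exact E[number of changeovers] by enumerating all 2^N flip sequences."""
    total = 0.0
    for seq in product((0, 1), repeat=N):  # 1 = heads, 0 = tails
        prob = 1.0
        for s in seq:
            prob *= p if s else (1 - p)
        changeovers = sum(seq[i] != seq[i + 1] for i in range(N - 1))
        total += prob * changeovers
    return total

# Matches the closed form (N - 1) * 2 * p * (1 - p)
print(expected_changeovers(5, 0.3))   # = 4 * 2 * 0.3 * 0.7 = 1.68
```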

$25.00

[SOLVED] Database Development and Design DTS207TC Python

Module Code and Title: Database Development and Design (DTS207TC)
School Title: School of AI and Advanced Computing
Assignment Title: 001: Assessment Task 1 (CW)
Submission Deadline: 23:59, 15th Dec (Friday)
Final Word Count: NA

Database Development and Design (DTS207TC)
Assessment 001: Individual Coursework
Due: Dec 15th, 2024 @ 23:59
Weight: 60%
Maximum Marks: 100

Overview & Outcomes

This coursework will be assessed against the following learning outcomes:
A. Identify and apply the principles underpinning transaction management within a DBMS.
B. Demonstrate an understanding of advanced SQL topics.
E. State the main concepts in data warehousing and data mining.

Submission

You must submit the following files to LMO:
1) A report named Your_Student_ID.pdf.
2) A directory containing all your source code, named Your_Student_ID_code.
NOTE: The report shall be A4 size, in size 11 font, and shall not exceed 8 pages in length. You may include only key code snippets in the report; the complete source code can be placed in the attachment.

Assessment Tasks

Matrix multiplication is a fundamental operation in linear algebra in which two matrices are multiplied to produce a new matrix. Specifically, if we have two matrices A and B, their product, denoted AB, is calculated by taking the dot product of the rows of A with the columns of B. For example, for two 2×2 matrices, the matrix multiplication formula is

[a11 a12][b11 b12]   [a11*b11 + a12*b21   a11*b12 + a12*b22]
[a21 a22][b21 b22] = [a21*b11 + a22*b21   a21*b12 + a22*b22]

To test your proficiency in SQL under an open-book setting, this assignment requires you to implement matrix multiplication using SQL. It is divided into the following steps:
1) The Python function in the attachment generates an N-dimensional square matrix composed of random numbers in the format (row_id, col_id, value). First, use a Python program to invoke this function and generate such a matrix, then import it into a table M in PostgreSQL. Additionally, discuss the impact of database transaction mechanisms on the performance of record insertion. Record the program running time (ideally,
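The join-and-aggregate pattern commonly used for SQL matrix multiplication over (row_id, col_id, value) tables can be sketched with an in-memory SQLite database. The assignment targets PostgreSQL, but this SQL is portable; the table names `m_a`/`m_b` are assumptions for illustration:

```python
import sqlite3

# Each matrix is stored sparsely as (row_id, col_id, value) rows
A = [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 3.0), (1, 1, 4.0)]
B = [(0, 0, 5.0), (0, 1, 6.0), (1, 0, 7.0), (1, 1, 8.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m_a (row_id INT, col_id INT, value REAL)")
conn.execute("CREATE TABLE m_b (row_id INT, col_id INT, value REAL)")
conn.executemany("INSERT INTO m_a VALUES (?, ?, ?)", A)
conn.executemany("INSERT INTO m_b VALUES (?, ?, ?)", B)

# Matrix product: join A's columns to B's rows, then sum the products
rows = conn.execute("""
    SELECT a.row_id, b.col_id, SUM(a.value * b.value) AS value
    FROM m_a a JOIN m_b b ON a.col_id = b.row_id
    GROUP BY a.row_id, b.col_id
    ORDER BY a.row_id, b.col_id
""").fetchall()
print(rows)   # [(0, 0, 19.0), (0, 1, 22.0), (1, 0, 43.0), (1, 1, 50.0)]
```

The join condition `a.col_id = b.row_id` pairs exactly the terms of each dot product, and the GROUP BY produces one output cell per (row, column) pair.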

$25.00

[SOLVED] CS 314 Project 2 Boolean Satisfiability Solver Python

CS 314 Project 2: Boolean Satisfiability Solver

1 Introduction

In this project, you will implement a Boolean satisfiability (SAT) solver in OCaml. The program takes a string representing a Boolean formula as input. This formula is in conjunctive normal form (CNF). CNF is a way of organizing logical statements used in mathematical logic and computer science. A statement is in CNF if it is a conjunction (AND) of one or more clauses, where each clause is a disjunction (OR) of literals, and a literal is either a variable or the negation of a variable. For example, a statement in CNF might look like this:

(a OR NOT b) AND (b OR c OR NOT d) AND (d OR NOT e)

In our project, a literal may also be one of the two boolean constants TRUE and FALSE. Your program should return a list of variable assignments that make the CNF formula true. For instance, to make (AND a b) true, both "a" and "b" need to be TRUE. To make (OR a b) true, there are three possible solutions: "a" and "b" are both TRUE; "a" is TRUE and "b" is FALSE; or "b" is TRUE and "a" is FALSE.

The context-free grammar for the CNF formula is straightforward and defined in our project as below:

1. <E> ::= ( <W> ) AND <E> | ( <W> )
2. <W> ::= <T> OR <W> | <T>
3. <T> ::= <V> | NOT <V>
4. <V> ::= a | b | ... | z | TRUE | FALSE

2 Input and Output of the Program

The input to the program is a string. An example is below:

"( a OR b OR NOT c ) AND ( NOT a ) AND TRUE"

The program's output is of type (string * bool) list, which gives the first possible boolean value assignment of the variables that satisfies the CNF formula. An example is below:

[("a", false); ("b", true); ("c", false)]

If such an assignment doesn't exist, please return a list of one tuple, as in the example shown below:

[("error", true)]

3 Basic Function Implementations

First, you need to break down the input string into a string list by eliminating whitespaces.
This function is already provided to you in the "project2_driver.ml" file, called "tokensListFromString (str : string)". Next, you will need to implement a basic function called "partition" in the code package given to you. The purpose of the "partition" function is to break down a string list with respect to a delimiter. It will help you get the input string list represented in a way that facilitates further processing. The input and output type of the "partition" function is given in Table 1.

Name                        Input Type                      Output Type
partition                   (string list), string           string list list
getVariables                string list                     string list
generateDefaultAssignments  string list                     (string * bool) list
generateNextAssignments     (string * bool) list            (string * bool) list * bool
lookupVar                   (string * bool) list, string    bool

Table 1: Required functions

An example of the input and output of the "partition" function is below:

Name:   partition
Input:  ["("; "a"; "OR"; "NOT"; "b"; ")"; "AND"; "("; "b"; ")"]  "AND"
Output: [["("; "a"; "OR"; "NOT"; "b"; ")"]; ["("; "b"; ")"]]

There are a few other basic functions you have to implement, listed in Table 1. The next function you will have to implement is called "getVariables". It takes a string list and gets the list of the variable names from the CNF; the input and output types are specified in Table 1. It filters the left/right parentheses, AND/OR, TRUE/FALSE, and NOT items out of the string list representing the input. The output should not have duplicates.
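The project itself must be written in OCaml, but the splitting logic of "partition" can be sketched language-neutrally; here is an illustrative Python version of the same idea:

```python
def partition(tokens, delim):
    """Split a token list into sublists at each occurrence of delim."""
    groups, current = [], []
    for tok in tokens:
        if tok == delim:
            groups.append(current)   # close the current group at the delimiter
            current = []
        else:
            current.append(tok)
    groups.append(current)           # final group after the last delimiter
    return groups

print(partition(["(", "a", "OR", "NOT", "b", ")", "AND", "(", "b", ")"], "AND"))
# [['(', 'a', 'OR', 'NOT', 'b', ')'], ['(', 'b', ')']]
```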
An example is given below:

Name:   getVariables
Input:  ["("; "a"; "OR"; "NOT"; "b"; ")"; "AND"; "("; "b"; ")"]
Output: ["a"; "b"]

You will also have to implement the function called "generateDefaultAssignments". It takes the list of variables and outputs a list of tuples, each tuple being a variable name and the boolean value "false". It is used to initialize the set of variables to the "false" value. An example is given below:

Name:   generateDefaultAssignments
Input:  ["a"; "b"]
Output: [("a", false); ("b", false)]

You will also have to implement the function called "generateNextAssignments". This function takes a list of string-boolean tuples, and returns an updated list of string-boolean tuples as if the binary number represented by the booleans were incremented by 1, together with an additional boolean carry value for when it overflows. Essentially, starting with the rightmost variable, if the checked variable is assigned false, it is set to true. If the checked variable is true, it is set to false and the algorithm carries to the next variable to the left. The returned carry value is true only if the resulting binary number overflows the given space, i.e. the algorithm reaches the leftmost variable and still carries.

Name:   generateNextAssignments
Input:  [("a", false); ("b", false)]
Output: ([("a", false); ("b", true)], false)
Input:  [("a", false); ("b", true)]
Output: ([("a", true); ("b", false)], false)
Input:  [("a", true); ("b", true)]
Output: ([("a", false); ("b", false)], true)

You will also have to implement the function called "lookupVar". This function takes an assignment list of string-boolean tuples as well as a string, and returns the boolean value associated with the given string in the assignment list. Behavior for string values not in the assignment list will not be tested.
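The carry behaviour of "generateNextAssignments" is exactly binary addition over the boolean values, with the rightmost variable as the least-significant bit. A Python sketch of the same logic (the real implementation must be OCaml):

```python
def generate_next_assignments(assignments):
    """Increment the boolean 'binary number'; return (new list, carry)."""
    out = list(assignments)
    carry = True                     # we are adding 1
    for i in range(len(out) - 1, -1, -1):
        if not carry:
            break
        name, value = out[i]
        out[i] = (name, not value)   # flip this bit
        carry = value                # carry onward only if the bit was True
    return out, carry

print(generate_next_assignments([("a", False), ("b", True)]))
# ([('a', True), ('b', False)], False)
```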
Name:   lookupVar
Input:  [("a", false); ("b", true)]  "a"
Output: false

4 Advanced Function Implementations

Now, we will handle the boolean satisfiability evaluation function and the data structures used to represent the satisfiability problem. There are three functions you will have to implement here. Their types are specified below.

1. buildCNF: string list → (string * string) list list
2. evaluateCNF: (string * string) list list → (string * bool) list → bool
3. satisfy: string list → (string * bool) list

We describe each of the functions below.

4.1 The buildCNF function

This function builds a data structure to represent the CNF. It takes a string list as input, which you obtain from a string by calling "tokensListFromString". Next, you can take advantage of the helper function "partition" built earlier, plus some further implementation, to construct a (string * string) list list, which represents the list of clauses of the CNF. Examples are shown below:

Input:  ["("; "a"; "OR"; "NOT"; "b"; "OR"; "c"; ")"; "AND"; "("; "b"; ")"]
Output: [[("a", ""); ("b", "NOT"); ("c", "")]; [("b", "")]]
Input:  ["("; "NOT"; "a"; "OR"; "b"; ")"; "AND"; "("; "NOT"; "b"; ")"]
Output: [[("a", "NOT"); ("b", "")]; [("b", "NOT")]]

Note that a literal can be either the negation of a variable or the variable itself; the string * string tuple in the list specifies this. If a NOT decorates a variable, the second item in the tuple is "NOT"; otherwise, it is an empty string. In the above examples there are two clauses, hence a list of two lists, each list representing a clause. Each list representing a clause consists of the components separated by the "OR" operator in the clause.

4.2 The evaluateCNF function

This function takes the data structure of a CNF returned by the "buildCNF" function and an assignment list as input, and returns a boolean value.
It evaluates the variable assignment on the CNF: if the assignment satisfies the CNF, it returns true; otherwise, it returns false. Examples are given below:

Input:  [[("a", ""); ("b", "NOT")]; [("b", "")]]  [("a", true); ("b", false)]
Output: false
Input:  [[("a", ""); ("b", "NOT")]; [("b", "")]]  [("a", true); ("b", true)]
Output: true
Input:  [[("a", ""); ("b", "NOT")]; [("b", "")]]  [("a", false); ("b", true)]
Output: false

4.3 The satisfy function

This function is the top-level function for the entire program. It takes a string list and returns the first variable assignment it finds that satisfies the CNF, or an error (specified in Section 2). The order of trying variable assignments should be: try the all-false assignment first, and then use "generateNextAssignments", like a carry adder, to produce the next variable assignment. It should terminate when it exhausts all possible variable assignments or finds the first one that works. An example of input and output to the "satisfy" function is given below.

Name:   satisfy
Input:  ["("; "a"; "OR"; "NOT"; "b"; ")"; "AND"; "("; "b"; ")"]
Output: [("a", true); ("b", true)]

5 Code Package

A code package is given to you. You will need to implement everything in "project2.ml". A few helper functions are provided in "project2_driver.ml". Please do not modify "project2_driver.ml". All your own helper functions should go into "project2.ml". We will test each function with independent inputs and outputs. For instance, if you implement "getVariables" wrong, you will get no credit for "getVariables", but we will use a correct "getVariables" function to create inputs when we test the other functions. However, any functions called within your other functions behave as defined by you, and are not guaranteed to work properly; for example, calling "getVariables" in your "satisfy" function uses your implementation.
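The exhaustive search that "satisfy" describes can be sketched end to end. This is an illustrative Python rendering of the algorithm, not the required OCaml; the helper names mirror the spec, and the alphabetical variable ordering here is a simplifying assumption:

```python
def split_on(tokens, delim):
    """Split a token list at each delimiter (cf. the 'partition' helper)."""
    groups, current = [], []
    for tok in tokens:
        if tok == delim:
            groups.append(current)
            current = []
        else:
            current.append(tok)
    groups.append(current)
    return groups

def build_cnf(tokens):
    """List of clauses, each clause a list of (name, 'NOT' or '') pairs."""
    clauses = []
    for clause_tokens in split_on(tokens, "AND"):
        inner = [t for t in clause_tokens if t not in ("(", ")")]
        clause = [(lit[1], "NOT") if lit[0] == "NOT" else (lit[0], "")
                  for lit in split_on(inner, "OR")]
        clauses.append(clause)
    return clauses

def evaluate_cnf(cnf, assignments):
    env = {"TRUE": True, "FALSE": False, **dict(assignments)}
    return all(any((not env[n]) if d == "NOT" else env[n] for n, d in clause)
               for clause in cnf)

def next_assignments(assignments):
    out, carry = list(assignments), True
    for i in range(len(out) - 1, -1, -1):
        if not carry:
            break
        name, value = out[i]
        out[i], carry = (name, not value), value
    return out, carry

def satisfy(tokens):
    cnf = build_cnf(tokens)
    variables = sorted({n for clause in cnf for n, _ in clause}
                       - {"TRUE", "FALSE"})
    assignments = [(v, False) for v in variables]   # all-false first
    while True:
        if evaluate_cnf(cnf, assignments):
            return assignments
        assignments, carry = next_assignments(assignments)
        if carry:                                   # wrapped around: unsatisfiable
            return [("error", True)]

print(satisfy(["(", "a", "OR", "NOT", "b", ")", "AND", "(", "b", ")"]))
# [('a', True), ('b', True)]
```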
Please do not modify the type signature of each function that you are required to implement.

Testing on an Interpreter

You can test your code in the OCaml interpreter. Typing "ocaml" in the terminal on an ilab machine will invoke the interpreter environment. You can load your program into the OCaml interactive toplevel and invoke your functions from the toplevel. In the OCaml interpreter, enter the following command at the top level:

# #use "project2.ml";;

It will load the code in "project2.ml" into the environment. After you load this file, you can then call the functions you have implemented in "project2.ml". To use these functions together with the "project2_driver.ml" file, enter the following commands in the interpreter:

# #mod_use "project2.ml";;
# #load "str.cma";;
# #use "project2_driver.ml";;

Submission

Please only submit the "project2.ml" file to Gradescope.

$25.00

[SOLVED] MATH49111/69111 Project 2a Neural Networks R

MATH49111/69111 Project 2a: Neural Networks

1 Introduction

The idea that computers may be able to think or exhibit intelligence dates back to the dawn of the computing era. Alan Turing argued in his seminal 1950 paper Computing Machinery and Intelligence that digital computers should be capable of thinking, and introduced the famous 'Turing test' to define what was meant by 'thinking' in this context: the computer should be able to emulate a human mind. He also suggested that a computer exhibiting human intelligence could not be programmed in the usual way (where the programmer has 'a clear mental picture of the state of the machine at each moment in the computation', as he put it) but would instead learn like a human, drawing inferences from examples. Turing had previously identified that learning of this sort could occur in what he called an unorganised machine: a computational model of a brain consisting of many simple logical units connected together in a network to form a large and complex system.

Although a computer emulation of a complete human mind is still some way off, Turing's idea that a network could be taught and learn was prescient. Models of this sort are now known as Artificial Neural Networks (ANNs), and the process by which they learn from examples is known as supervised learning, part of the large field of machine learning. In the last decade or so, ANNs have developed to become the best algorithms available for a range of 'hard' problems, including image recognition and classification, machine translation, text-to-speech, and the playing of complex games such as Go.

In this project we will develop a simple neural network and train it to classify data into one of two classes, a so-called binary classification problem. (This problem is discussed further by Higham and Higham (2019), whose notation we use in this project.) Suppose we have a set of points in R², as shown in Fig.
1, acquired from some real-world measurements, with each point being of one of two classes, either a blue cross or a red circle.

Figure 1: A set of data points in R², each of which is of one of two types.

This type of data could arise in many applications, for example the success or failure of an industrial chemical process as a function of ambient temperature and humidity, or whether a customer has missed a loan payment as a function of their income and credit score. Given a new point x ∈ R², we'd like to be able to predict from the existing data which of the two classes this new point is likely to be in. Since our example data shows a clear pattern (red circles tend to lie at larger values of x1 than blue crosses), we could sketch a curve by eye (as in Fig. 2) that divides the plane into two, with the red circles on one side and the blue crosses on the other.

Figure 2: The plane is divided by a curve, with points on each side classified differently. Given a new point on the plane, we can now estimate which class it is in by seeing on which side of the line it lies.

We could then classify any new point x ∈ R² by seeing on which side of the curve it lies. Our neural network will perform this classification for us by (1) learning from existing data where the boundary between the two classes of data point lies, and (2) allowing us to evaluate the likely class of a new data point by using the boundary found in step (1). Specifically, the network defines a function F(x), where the curve dividing our two types of point is the zero contour F(x) = 0. Regions where the network outputs a positive value F(x) > 0 correspond to one class, and regions where it is negative to the other class.
Although this two-dimensional problem is relatively simple, the neural network developed here extends naturally to much more difficult classification problems in higher dimensions (where x has hundreds or thousands of components), with more than two classes of points, and where the boundaries between different regions are not as clear as in our example above. One example is character recognition, as used by banks and postal services to automatically read hand-written cheques and postcodes. In this case, a segmentation algorithm is applied to split the scanned text into lots of small images, each containing a single digit. Suppose each such small image has 25 × 25 = 625 greyscale pixels; the data in it can be represented by a vector x ∈ R^625. A neural network of exactly the sort we develop can be used to classify a handwritten character encoded in the vector x into one of 36 classes (one for each of the characters 0–9 and A–Z) with very good accuracy.

2 Neural Networks: Theory

2.1 Overall setup

An artificial neural network consists of a set of neurons. A neuron is a simple scalar functional unit with a single input and a single output, modelled by a nonlinear activation function σ : R → R. In the feed-forward networks we will study, these neurons are arranged in a series of layers, with the outputs of neurons in one layer connected to the inputs of neurons in the next, as shown in Fig. 3. The networks in this project will also be fully connected, meaning that the nodes in two adjacent layers and the connections between them form a complete bipartite graph (every neuron in one layer is connected to every neuron in the next).

Figure 3: Fully-connected network topology with two input neurons, two hidden layers of four neurons each, and one output neuron.
The inputs to the neural network are specified at the first layer; in our classification problem, this consists of the two components of the input vector x = (x1, x2). The output of the network is the output of the final layer of neurons; in our example this is just a single neuron, outputting a real number that we hope is close to either +1 or −1, depending on the class of the point. In between are one or more hidden layers. The network as a whole defines a nonlinear function, in this case R² → R (two inputs, one output).

As noted above, there are two stages to using a neural network.

1. The network is trained by specifying both the input to the network and the desired (target) output. Through an iterative procedure ("learning"), parameters relating to the connections between neurons (known as the weights and biases, defined below) are modified so that the network reproduces the target output for a wide range of input data. The overall structure of the network (the number of layers, and the number of neurons in each layer) is not changed in the training.

2. The weights and biases are then fixed, and the output of the network is evaluated for arbitrary new input data.

We will consider initially the second of these steps, assuming that the network has been pre-trained and that the weights and biases are known.

2.2 Evaluating the neural net output

Since each neuron is a simple scalar function (R → R), the multiple inputs into a neuron from neurons in the previous layer must be combined into a single scalar input. To describe how this is done we first establish some notation. Let L represent the total number of layers in the network and n_l represent the number of neurons at layer l, for l = 1, 2, ..., L. The input to the jth neuron at layer l is denoted by z_j^[l] and the output from the jth neuron at layer l by a_j^[l]. The statement that each neuron is modelled by an activation function σ can then be written algebraically as

a_j^[l] = σ_l(z_j^[l]),    for l = 2, 3, ..., L,  j = 1, ..., n_l,    (1)

where the subscript on the activation function implies that different layers may use different activation functions (but all the neurons in a layer use the same one). The statement that the inputs to the neural network are specified at the first layer can be written as

a_j^[1] = x_j,    j = 1, ..., n_1.    (2)

Note that the input to the network x_j determines the outputs a_j^[1] of the neurons in the first layer rather than their inputs, so (1) is not evaluated for the first layer l = 1. The output of the network is the output of the final layer of neurons, a_j^[L]. Consequently, the network has n_1 (scalar) inputs and n_L (scalar) outputs.

The notation can be simplified by defining the neuron inputs and outputs as vectors z^[l], a^[l] ∈ R^{n_l}, with components z_j^[l] and a_j^[l] respectively. Then, defining the activation function to act componentwise, (1) becomes

a^[l] = σ_l(z^[l]),    for l = 2, 3, ..., L.    (3)

The activation function is typically a nonlinear, monotonically increasing function, but may have one of a number of different functional forms. One common choice is the tanh function,

σ(z) = tanh(z).    (4)

While (3) and (4) together define the behaviour of the individual neurons, the more significant part of a neural network is the set of connections between neurons, which determine how the input z_j^[l] (of a neuron j in layer l) is determined from the outputs of the neurons in the previous layer. Specifically, a neuron input z_j^[l] is a biased linear combination of the outputs of the neurons in layer l − 1,

z_j^[l] = Σ_{k=1}^{n_{l−1}} w_{jk}^[l] a_k^[l−1] + b_j^[l],    for l = 2, 3, ..., L,  j = 1, ..., n_l,    (5)

where w_{jk}^[l] are the weights and b_j^[l] are the biases at layer l. These weights and biases are adjusted while the network is learning from the training data, but thereafter remain fixed.
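The layer evaluation described above is a loop of affine maps followed by the activation. A minimal NumPy sketch (illustrative only; the randomly chosen weights stand in for trained ones, and layer sizes follow the (2, 4, 4, 1) network of Fig. 3):

```python
import numpy as np

def feed_forward(x, weights, biases, sigma=np.tanh):
    """Evaluate a[L] from input x, given per-layer weights W[l] and biases b[l]."""
    a = np.asarray(x, dtype=float)           # a[1] = x
    for W, b in zip(weights, biases):
        a = sigma(W @ a + b)                 # z[l] = W[l] a[l-1] + b[l]; a[l] = sigma(z[l])
    return a

# Random stand-in parameters for a network with layer sizes (2, 4, 4, 1)
rng = np.random.default_rng(0)
sizes = [2, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

out = feed_forward([0.5, -0.2], weights, biases)
print(out.shape)   # (1,)
```

Because the final activation is tanh, the single output always lies strictly between −1 and +1, matching the two class labels.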
Defining b[l] ∈ Rnl as the vector of biases and W[l] ∈ Rnl×nl−1 as the matrix of weights at layer l, with components bj[l] and wjk[l] respectively, we can write (5) as

z[l] = W[l] a[l−1] + b[l] ,        for l = 2, 3, . . . , L.                  (6)

Combining this with (3) and (2) we find

a[1] = x,                                                                    (7)
a[l] = σl (z[l]) = σl (W[l] a[l−1] + b[l]) ,        for l = 2, 3, . . . , L.  (8)

Equations (7) and (8) together describe the feed-forward algorithm for evaluating the output of a neural network given an input vector x (and known weights W[l] and biases b[l]).  First (7) is used to define a[1], then (8) is used for l = 2, 3, . . . , L to obtain the neuron outputs a[2], a[3], . . . , a[L] in turn.  The last of these, a[L], is the output of the neural network.

2.3 Training the neural network

2.3.1 Gradient descent

The feed-forward algorithm outlined in the previous section allows the output of the neural network to be evaluated from its inputs, provided that the weights and biases at each layer of the network are known.  These parameters are determined by training the neural network against measurements or other data for which both the input and target output of the network are known. This training data consists of a set of N inputs to the network, x{i}, with N corresponding target outputs, denoted y{i} (i = 1, . . . , N).  We can formalise the idea of how closely our network produces the target outputs y{i} (when given the corresponding inputs x{i}) by defining a total cost function

C = Σi=1..N (1/2) ∥y{i} − a[L](x{i})∥2² ,                                    (9)

where ‘total cost’ refers to this cost being the sum of costs over all the N training data, and the squared L2 norm ∥ · ∥2² of a vector is the sum of its components squared,

∥x∥2² = x1² + x2² + · · · + xn² .                                            (10)

In the cost function (9), for each training datum i the norm is taken of the difference between the target network output y{i} and the actual output of the neural network a[L](x{i}), evaluated from the corresponding network input x{i} by using the feed-forward algorithm. If the network reproduces exactly the target output for every piece of input data, the cost function is equal to its minimum possible value, zero.

For a given set of training data (x{i}, y{i}) (i = 1, . . . , N), the cost C is a function of the parameters of the neural network, namely all the weights and biases for all of the layers. We could write this dependence explicitly by using (8) recursively to express a[L](x{i}) in terms of a[1] = x{i} and the weights and biases of layers 2 to L. The network sketched above with nl = (2, 4, 4, 1) has in total 9 biases (one for each hidden and output neuron) and 28 weights (one for each connection between a pair of neurons), making the cost C dependent on 37 parameters in total. For ease of notation we denote by p ∈ RM the vector containing the weights and biases at each layer that parameterise the network, where M is the number of such parameters (in this case M = 37).  We can then write C = C(p) without needing to refer to the weights and biases explicitly.

The process of training the network is equivalent to minimising the difference between the desired (target) and actual neural network outputs for the training data, or in other words, choosing p (and thus the weights and biases at each layer) so as to minimise the cost C(p), which is a nonlinear function of p. We will solve this problem using an iterative method based on the classical technique of steepest descent.  Suppose we have a current vector of parameters p and initial cost C(p), and wish to generate a new vector of parameters p + δp such that the new cost C(p + δp) is minimised. What value should we choose for δp?
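The feed-forward pass (7)–(8) and the cost function (9) translate almost directly into code. The following NumPy sketch is illustrative only; the function names, the random initialisation, and the use of tanh at every layer are my own choices, not prescribed by the text:

```python
import numpy as np

def feed_forward(x, weights, biases):
    """Evaluate the network output a[L] for input x via (7)-(8)."""
    a = x                          # (7): the first layer passes the input through
    for W, b in zip(weights, biases):
        z = W @ a + b              # (6): biased linear combination
        a = np.tanh(z)             # (3)-(4): componentwise activation
    return a

def total_cost(weights, biases, xs, ys):
    """Cost (9): sum over the training data of half the squared L2 error."""
    return sum(0.5 * np.linalg.norm(y - feed_forward(x, weights, biases)) ** 2
               for x, y in zip(xs, ys))

# A network with layer sizes (2, 4, 4, 1), as in the sketch in the text,
# giving 28 weights and 9 biases (37 parameters in total).
rng = np.random.default_rng(0)
sizes = [2, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
print(feed_forward(np.array([0.5, -0.2]), weights, biases))
```

With tanh activations the output components always lie in (−1, 1), so targets y{i} would need to be scaled into that range.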
For small δp we can use a Taylor series to expand the new cost,

C(p + δp) ≈ C(p) + Σr=1..M (∂C/∂pr) δpr ,                                    (12)

where pr is the rth component of the parameter vector p, and terms of order (δp)² and higher have been neglected.  Equivalently, we can write this in vector form,

C(p + δp) ≈ C(p) + (∇C(p)) · δp,                                             (13)

where the gradient operator ∇ acts over each of the components of p (i.e. over each weight and bias in the network),

∇C(p) = (∂C/∂p1 , ∂C/∂p2 , . . . , ∂C/∂pM)T .                                (14)

To minimise the new cost C(p + δp) we wish to make the final term of (13) as negative as possible.  The Cauchy–Schwarz inequality states that

|(∇C(p)) · δp| ≤ ∥∇C(p)∥2 ∥δp∥2 ,                                            (15)

with equality only when ∇C(p) lies in the same direction as (is a multiple of) δp.  This suggests that to minimise C(p + δp) we should choose δp in the direction of −∇C(p), and so we take

δp = −η∇C(p),                                                                (16)

where the positive constant η is the learning rate.  The training of the neural network therefore starts by choosing some initial parameters p for the network (we will choose these randomly), and repeatedly performing the iteration

p ← p + δp = p − η∇C(p).                                                     (17)

We would like this iteration to approach the global minimum of C(p) within a reasonable number of steps.  Unfortunately, minimisation of a nonlinear function in high dimensions (recall that even in our simple network p has 37 components) is a fundamentally difficult problem.

For sufficiently small η, the Taylor series approximation (13) will be valid and each iteration of (17) will decrease the cost function.  However, for this to be true in general, η has to be arbitrarily small. In practice, setting η to a very small value means that p changes only very slightly at each iteration, and a very large number of steps is needed to converge to a minimum.
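The steepest-descent iteration (17) is easy to demonstrate on a toy cost with a known minimum; the quadratic example below is mine, not from the text:

```python
import numpy as np

def gradient_descent(grad_C, p0, eta, n_steps):
    """Steepest descent (17): repeatedly apply p <- p - eta * grad C(p)."""
    p = np.array(p0, dtype=float)
    for _ in range(n_steps):
        p -= eta * grad_C(p)
    return p

# Illustrative quadratic cost C(p) = |p - p_star|^2, whose gradient is 2(p - p_star),
# so the iterate contracts towards p_star by a factor (1 - 2*eta) per step.
p_star = np.array([1.0, -2.0])
grad = lambda p: 2.0 * (p - p_star)
p = gradient_descent(grad, [0.0, 0.0], eta=0.1, n_steps=200)
print(p)  # converges towards p_star
```

Rerunning with eta=1.1 makes the same iteration diverge, which is the "η too large" failure mode discussed in the text.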
On the other hand, setting η too large means that the approximation (13) is rarely valid and the choice of step (16) will not decrease the cost function as desired. Choosing an appropriate learning rate η is a balance between these two considerations.

Even if the iteration converges, it may converge to a local minimum of C, where ∇C = 0, but where C is nonetheless much larger than the global minimum cost minp C(p).  Finding the global minimum of an arbitrary nonlinear function in a high-dimensional space is near impossible.  We therefore abandon the goal of finding the global minimum of C(p), and instead simply look for a value of p that has a cost C(p) less than some small threshold, τ, say.

2.3.2 Stochastic gradient descent

Each step of the steepest descent iteration (17) requires an evaluation of ∇C(p), which contains the derivatives of the cost function with respect to each of the weights and biases in the network.  Recalling the definition of the cost function (9), we write

∇C(p) = Σi=1..N ∇Cx{i}(p),                                                   (18)

where the contribution to the cost from each of the training data points i is

Cx{i}(p) = (1/2) ∥y{i} − a[L](x{i})∥2² .                                      (19)

Evaluation of the gradient of the cost (18) can be computationally expensive, since the number of points N in the training data set may be large (many tens of thousands), and the number of elements in ∇C (the number of parameters in the weights and biases) may be several million in a large neural network.  This means that it takes a long time to calculate each iteration of the steepest descent method (17). A faster alternative is to instead calculate the increment δp at each iteration from the gradient of the cost associated with a single randomly chosen training point,

p ← p + δp = p − η∇Cx{i}(p),                                                 (20)

a technique called stochastic gradient descent, or simply stochastic gradient.
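A minimal sketch of the stochastic gradient iteration (20), using uniform sampling with replacement as described in the next section; the per-point gradient here comes from a toy least-squares problem of my own, not from the network cost:

```python
import numpy as np

def sgd(grad_Ci, p0, eta, n_steps, N, rng):
    """Stochastic gradient (20): each step uses one randomly chosen point i."""
    p = np.array(p0, dtype=float)
    for _ in range(n_steps):
        i = rng.integers(N)          # sample i uniformly, with replacement
        p -= eta * grad_Ci(p, i)
    return p

# Toy problem: fit a scalar p so that p*t_i ~ y_i, with per-point cost
# C_i(p) = (p*t_i - y_i)^2 / 2 and hence gradient (p*t_i - y_i)*t_i.
rng = np.random.default_rng(1)
t = rng.standard_normal(1000)
y = 3.0 * t                           # data generated with true parameter 3
grad_i = lambda p, i: (p * t[i] - y[i]) * t[i]
p = sgd(grad_i, [0.0], eta=0.05, n_steps=5000, N=len(t), rng=rng)
print(p)  # close to 3
```

Each step only looks at one of the 1000 data points, which is the whole appeal: per-step cost is O(1) in N rather than O(N).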
In stochastic gradient the reduction in the total cost function (9) at each step is likely to be smaller than in the steepest descent method (17) – in fact many steps are likely to increase the total cost slightly – but since many more iterations can be performed in a given time, the convergence is often quicker. There are several ways of choosing the training data point i to be used at each step, but we will use the simplest: at each step randomly select (with replacement) a training data point i independently of previous selections.

2.3.3 Evaluating ∇Cx{i}(p): Back-propagation

To use the stochastic gradient algorithm (20) we must evaluate ∇Cx{i}(p), the derivative of the cost function associated with a single training data point with respect to each of the network weights and biases,

∂Cx{i}/∂wjk[l] ,    for l = 2, . . . , L,                                     (21)

∂Cx{i}/∂bj[l] ,     for l = 2, . . . , L.                                     (22)

The dependence of the cost Cx{i} on the weights and biases seems rather complicated, so an obvious temptation is to evaluate the derivatives by finite-differencing.  This is possible, easy to implement and highly recommended for validation/debugging purposes (see below). However the approach is also extremely expensive and not viable for practical computations with big neural networks.

It turns out that the recursive nature of the network helps to simplify the analytical (and efficient) calculation of the required derivatives through an algorithm called back-propagation.  Details can be found in the paper by Higham and Higham (2019).  The key quantities required are known (for somewhat obscure reasons) as the errors, defined as

δj[l] = ∂Cx{i}/∂zj[l] ,    for 1 ≤ j ≤ nl and 2 ≤ l ≤ L.                      (23)

Each represents the derivative of the cost for our chosen training point i with respect to the input z to the jth neuron at layer l.
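Finite-differencing, recommended above for validation and debugging, amounts to two cost evaluations per parameter, which is exactly why it is too expensive for training. A central-difference sketch (the function name and the test cost are mine):

```python
import numpy as np

def fd_gradient(C, p, h=1e-6):
    """Approximate grad C(p) by central differences, one parameter at a time.

    Costs 2*len(p) evaluations of C, so it is only suitable for checking.
    """
    g = np.zeros_like(p, dtype=float)
    for r in range(len(p)):
        e = np.zeros_like(p, dtype=float)
        e[r] = h
        g[r] = (C(p + e) - C(p - e)) / (2 * h)
    return g

# Check against a cost with a known gradient: C(p) = sum(p^2), grad C = 2p.
p = np.array([0.3, -1.2, 2.0])
approx = fd_gradient(lambda q: np.sum(q ** 2), p)
print(approx)  # close to 2*p
```

For a network with millions of parameters this loop would need millions of feed-forward passes per gradient, whereas back-propagation below obtains the same derivatives in roughly one backward pass.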
Higham and Higham (2019) show that the errors can be computed by moving backwards through the network, starting with the output layer, l = L, for which

δ[L] = σ′L(z[L]) ◦ (a[L] − y{i}) ,                                            (24)

where the operator ◦ is the componentwise (Hadamard) product, defined by

u ◦ v = (u1 v1 , u2 v2 , . . . , un vn )    for u, v ∈ Rn .                   (25)

The Hadamard product can be avoided by writing equation (24) using a product with a diagonal matrix,

δ[L] = diag(σ′L(z1[L]), σ′L(z2[L]), . . . , σ′L(znL[L])) (a[L] − y{i}) .       (26)

Having established the errors for the output layer (l = L), we then move backwards through the network using the recurrence

δ[l] = σ′l(z[l]) ◦ (W[l+1])T δ[l+1] ,    for l = L − 1, . . . , 2,             (27)

which can again be written with a diagonal matrix,

δ[l] = diag(σ′l(z1[l]), σ′l(z2[l]), . . . , σ′l(znl[l])) (W[l+1])T δ[l+1] ,    for l = L − 1, . . . , 2.    (28)

Note how this produces the errors in layer l from those already computed in layer l + 1. The real punchline is that the derivatives (21) and (22) can be expressed in terms of the errors as

∂Cx{i}/∂bj[l] = δj[l] ,    ∂Cx{i}/∂wjk[l] = δj[l] ak[l−1] ,    for l = 2, . . . , L.    (29)

This is not only very neat (much neater than one may have expected, given the complicated structure of the network) but also much more efficient than the evaluation of the derivatives by finite-differencing.
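For the tanh activation (4), whose derivative is 1 − tanh², the recurrences (24) and (27) and the derivative formulas (29) can be sketched as follows. The code is my own illustration of the algorithm, not taken from Higham and Higham (2019):

```python
import numpy as np

def backprop(x, y, weights, biases):
    """Gradients of the per-point cost (19) via back-propagation.

    Returns (dW, db): lists of derivatives matching weights/biases in shape.
    """
    # Forward pass, storing z[l] and a[l] for every layer.
    a, zs, activations = x, [], [x]
    for W, b in zip(weights, biases):
        z = W @ a + b
        a = np.tanh(z)
        zs.append(z)
        activations.append(a)
    sigma_prime = lambda z: 1.0 - np.tanh(z) ** 2   # derivative of tanh
    # (24): error at the output layer.
    delta = sigma_prime(zs[-1]) * (activations[-1] - y)
    dW = [None] * len(weights)
    db = [None] * len(biases)
    for l in range(len(weights) - 1, -1, -1):
        # (29): derivatives of the cost from the errors.
        db[l] = delta
        dW[l] = np.outer(delta, activations[l])
        if l > 0:
            # (27): propagate the error backwards one layer.
            delta = sigma_prime(zs[l - 1]) * (weights[l].T @ delta)
    return dW, db

# Tiny network with layer sizes (2, 3, 1); the weights are random stand-ins.
rng = np.random.default_rng(0)
sizes = [2, 3, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
dW, db = backprop(np.array([0.5, -0.3]), np.array([0.2]), weights, biases)
print([w.shape for w in dW])  # shapes match the weight matrices
```

A useful habit, as suggested above, is to validate such a routine against finite differences of the cost before trusting it for training.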


[SOLVED] CDS533 Assignment 2 Statistics for Data Science SQL

CDS533 Assignment 2 (Statistics for Data Science)

The assignment may consist of two parts (a problem set and an R component) or just one part. Please follow the guidelines provided below for each.

Problem Set:
1. Manually solve each question and clearly demonstrate all necessary steps.
2. You can either type out your answers in MS Word or provide scanned images of your written answers.
3. Ensure that your handwriting is clear and legible, and that the quality of scanned files is sufficient. Unclear submissions will not be accepted.

R Component:
1. Provide screenshots of R console outputs or insert R plots as required by the questions.
2. For questions involving analysis based on R outputs, please provide detailed explanations to showcase your understanding.

On final submission

Personal Information: Ensure that your full name and student ID (SID) are written at the very beginning of the document.

Final Submission Format:
● Round your final answers to 3 decimals.
● Your final submission, including the answers from the problem set and R component, should be merged into a single Microsoft Word document (.docx).

Submission Deadline: Please upload your work to Moodle by 1:30 pm, 24th Oct, 2024.

NOTE: USE THE P-VALUE METHOD FOR HYPOTHESIS TESTING PROBLEMS.

1. Lengths of 9 randomly sampled oak seedlings from a given plantation are listed below:

2.58  2.43  1.98  2.62  2.40  2.96  2.36  2.77  2.54

Assume that the population of oak seedling lengths follows a normal distribution; let μ be the mean length for oak seedlings from this plantation and let σ² be the variance.
(a) Construct a 90% confidence interval for μ and interpret the result.
(b) Construct a 95% confidence interval for σ² and interpret the result.
(c) Suppose you obtained data on 36 seedlings, and that the sample mean and variance are exactly the same as in (a).
Construct a 95% confidence interval for the mean length of oak seedlings in that case. How does it compare to your answer for part (a)?

2. A veterinary researcher claims that a new drug will be 70% effective in improving the condition of sheep suffering from a particular illness. To test this claim, a veterinary clinic tries the drug on 80 sheep suffering from the illness.
(a) The results indicate that there was improvement in the condition of 50 sheep. Is there any evidence against the claim at α = 0.05?
(b) Suppose now the experiment had been conducted with 320 sheep and improvement was noted in the condition of 200 sheep. Test the claim in this circumstance at α = 0.05.
(c) Let p be the “true” effective rate of the drug. Using the data from part (a), find a 90% CI for p.

3. (a) Suppose we are sampling from a N(μ, 16) distribution. How large must n be so that a 90% CI for μ has length equal to 0.5?
(b) Suppose you have a random sample from a N(μ, σ²) distribution with σ² unknown. Let n = 10. Consider testing H0: μ = 22 versus Ha: μ ≠ 22. Suppose you observe x̄ = 20.7 and S² = 4.17. Consider testing this hypothesis by using confidence intervals. Do you reject H0 at α = 0.10? At α = 0.05?
(c) Using the data in part (b) of this problem, perform the t-test in the usual fashion. Use the pt command to find the exact p-value in R. Is this consistent with your results in part (b)?
(d) Using the data in part (b) of this problem, use the qt command to find a 99.5% confidence interval for μ.

4. The following data are paired yields (in bushels) of two varieties of wheat grown on standard-sized plots. Each pair of plots was in a different location. The plots within a pair were immediately adjacent to one another.
Location    1     2     3     4     5     6     7     8     9     10
Variety I   42.1  36.8  49.4  28.5  51.0  32.9  39.4  43.7  37.5  27.6
Variety II  44.3  38.1  49.4  30.5  52.8  33.7  38.2  47.8  39.1  28.5

(a) State (briefly) the assumptions you must make to proceed with an analysis of data of this form.
(b) Perform (without using the computer) a test to determine whether the mean yield of Variety II differs from the mean yield of Variety I. (State hypotheses, give p-value, etc.)
(c) Find a 99% CI for the difference between mean yields.

5. A researcher wishes to compare the mean egg weight of two related species of laboratory birds. Nine randomly selected eggs are obtained from birds of each species, with data given below.

Species A  4.25  4.87  5.13  4.85  3.95  5.09  4.36  5.57  4.81
Species B  4.32  4.48  5.05  3.27  4.23  4.41  4.77  3.75  3.90

(a) State (briefly) the assumptions you must make to proceed with an analysis of this problem. Define all terms.
(b) Perform (without using the computer) a hypothesis test of the claim that the two species have the same mean egg weight (versus the two-sided alternative). (State the hypotheses, give p-value, etc.)
(c) Compute a 95% CI for the difference in mean egg weight between the two species.
(d) Test the hypothesis that the mean egg weight of Species B eggs equals the mean weight of Species A eggs plus 0.5 (versus the two-sided alternative).

6. The data tobacco.csv uploaded in Moodle contains a data set on a genetics experiment for lengths of tobacco leaves.
(a) Use R to make a plot that effectively compares the distribution of flower lengths between the F1 and F2 generations.
(b) Use R to construct a 95% confidence interval for the difference in population mean flower lengths between the F2 and F1 generations.
(c) Interpret this interval in the context of the problem.
(d) Use R to test the hypothesis that mean flower length is equal for the F1 and F2 generations versus the alternative that it is different.
(e) Interpret this test in the context of the problem.
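As a cross-check on the arithmetic behind question 3(a): the length of a 90% z-interval for the mean of a N(μ, 16) population is 2·z₀.₀₅·σ/√n, so requiring length 0.5 gives n = (2·z₀.₀₅·σ/0.5)², rounded up. The assignment itself asks for R; this Python sketch is only to verify the number:

```python
from math import ceil
from statistics import NormalDist

sigma = 4.0                       # N(mu, 16) has standard deviation 4
length = 0.5                      # required total CI length
z = NormalDist().inv_cdf(0.95)    # z_{0.05} for a two-sided 90% interval

n = ceil((2 * z * sigma / length) ** 2)
print(n)  # 693
```

In R the same quantile is qnorm(0.95), so the computation carries over directly.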


[SOLVED] BIOC334/434 homework Bioinformatics Java

BIOC334/434 homework Bioinformatics (by Focco van den Akker, 2024)

This is a homework problem that is actual original research and could lead to interesting discoveries. You’ll be trying to assign possible biological functions to proteins that have been labeled/annotated as having an ‘Unknown Function’. There are two proteins you’ll be researching. In the first part, you’ll select a protein with unknown function for which the structure has already been determined. In the second, the protein structure has likely not been determined; only its sequence is known.

You need to print out and hand in a 6-10 page PowerPoint document with image snapshots of the names of the proteins, amino acid sequences, as well as all the key results you get from the bioinformatics websites, and write a few sentences describing what each finding means. Then for each of the two proteins you studied, add a paragraph at the end summarizing what you have found out about it (i.e. likely structure, likely function, possible interaction partners, etc.). Feel free to cut-and-paste multiple snapshot outputs on a single page to keep it within the page limits. Once finished and the PowerPoint file is saved, save it also as a PDF file (filename lastname_firstname_BIOCx34_HW.pdf) and email that .pdf file to me at [email protected]. It is due Wednesday November 20th at 11:59 pm. Please start early, as some of the bioinformatics tools/servers can take a day before you get the results emailed back.

1. Research possible functions for a protein with Unknown Function for which there is a structure known:

Go to the PDB: www.rcsb.org

Click on ‘Advanced Search’ (link just under the regular search box at the top). In the Advanced Search Query Builder section, and then in the Structure Attribute section, click on the double down arrow ( ) to select ‘Structure Title’. In the search box to the right of that, which searches for ‘contains phrase’, enter ‘Unknown function’ and then hit the enter button.
This should yield around 340 hits. Now select a structure as follows (so you all don’t pick the same one, being the first listed): remember which day of the month you were born on (e.g. the 15th of March) and divide that by 2 (15/2 = 7.5). Round that number down to a whole number (if the number is 15, choose 14, as there are only 14 pages). Select that page of hits by clicking the arrows at the bottom. On that page, select the xth hit from the top, where x is your birth month (e.g., March = 3).

After you have chosen the PDB target to analyze, select it by clicking on either the 4-digit PDB identifier/code or the title of the entry (please keep track of the 4-digit PDB identifier, as you need it for input in subsequent bioinformatics tools; the 4-digit PDB identifier has both letters and numbers).

Download the PDB file of the coordinates onto your computer: click on ‘Download Files’ and select ‘PDB format’. This will download a coordinate file with the extension .pdb. Now also download the amino acid sequence file: click again on ‘Download Files’ and select ‘FASTA Sequence’. This will download a sequence file with the extension .fasta.txt.

Do the following steps (please label the steps ‘1’, ‘2’, etc. in your homework so I can tell what you completed):

1. First, let’s check if the model matches the experimentally determined electron density well, so you are confident of the structure. Go to www.rcsb.org for that PDB structure with the PDB id. Under the picture on the left showing the structure, click on ‘electron density’. The structure in 3D view should show up. Click on a region in an alpha helix or beta-strand with a single left mouse button click and the density will show up (2Fo-Fc shows all atoms; Fo-Fc(+ve) is positive difference electron density where they should have added some atom(s); Fo-Fc(-ve) is negative difference electron density where they should not have placed an atom(s)).
Just click on various regions in the protein (especially a ligand, if present) and you will find that the inner core has better defined 2Fo-Fc density and surface regions likely have poorer density (more flexible/disordered). Snap an image of a good region of electron density and include it in your homework, and mention in 1 sentence your confidence that the crystal structure was well built and refined. If you can find a definite error in the structure and show it in the figure, that would be very impressive, but don’t worry if you cannot find an error. By the way, if you have chosen an NMR structure, there will of course be no electron density to calculate; in that case, click to the right at wwPDB Validation -> 3D Report and image-snap some relevant details regarding the validity of this NMR structure.

2. Use the Dali Server to find similar structures (input the protein coordinate .pdb file); note that you should add your email address, as this server can take up to a day to email you the results. Being similar to another protein could yield clues to the structure of your protein, and perhaps to it having a similar function as that protein: http://ekhidna2.biocenter.helsinki.fi/dali/

3. Predict ligand binding sites on your protein using the PrankWeb Server: https://prankweb.cz/

4. Sometimes inside the crystal lattice (if you picked a crystal structure, which is quite likely), there could be some molecules forming important dimer, trimer, or other types of oligomeric arrangements. Please check that out using the following servers: http://eppic-web.org/ewui/ http://www.ebi.ac.uk/pdbe/pisa/

5. Now go to the BLAST protein website to find database protein matches by using only the protein amino acid sequence to search with: http://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome This BLAST search might yield some clues about function if you detect weak homologs.
Paste in the amino acid sequence that you downloaded from the PDB (the FASTA sequence file). Search the ‘Non-redundant protein sequences (nr)’ database first. Keep the Algorithm as ‘blastp (protein-protein BLAST)’. Feel free to also try the PSI-BLAST, PHI-BLAST, and DELTA-BLAST algorithms (depending on whether you obtained any useful hits or not).

6. Do a STRING search to find proteins that are predicted to interact with your protein (this could also yield clues to the function): http://string-db.org/#

7. Paragraph summarizing the above results from points 1.1-1.6.

2. Research possible functions for a protein with Unknown Function for which there is no structure known:

1. First, let’s make sure all students are not selecting and studying the same protein: let’s find an organism for which you’ll be studying a protein with Unknown Function, based on your name. Go to http://string-db.org/, click Search, click on the down-pointing triangle right of the box labeled “organism:”, and select an organism that starts with either the first letter of your first name or last name by scrolling that list. Then, go to the PubMed site https://www.ncbi.nlm.nih.gov/ and change the selection in the box in the left upper corner from ‘All databases’ to ‘Protein’. For example, for someone whose name starts with an ‘A’, I selected the organism group acetobacter. Then enter into the search box “Unknown function [Title] acetobacter” (search by everything listed in red, including the quotation marks around Unknown function, but do change the organism name of course). The resulting list has the names as well as the amino acid size listed (### aa protein). Select a protein from this list that is at least 200 aa (= amino acids) by clicking on the ‘FASTA’ link under the hit to download the amino acid sequence.

2.
Now go to the BLAST protein website to find database protein matches for which there are structures determined: http://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome This BLAST search might yield some clues about similar structure/function if you detect weak homologs. Paste in the amino acid sequence that you downloaded above (the FASTA sequence file). Change the Database to be searched to ‘Protein Data Bank proteins (pdb)’ so you are searching sequences for which a structure has been determined. Keep the Algorithm as ‘blastp (protein-protein BLAST)’. Check out the results.

3. Run PSIPRED on that amino acid sequence to predict what fold and function your protein has (select as many algorithms as you like by checking the boxes): http://bioinf.cs.ucl.ac.uk/psipred/

4. Do the same using HHpred: https://toolkit.tuebingen.mpg.de/#/tools/hhpred

5. As well as using InterProScan: http://www.ebi.ac.uk/interpro/search/sequence-search

6. Do a STRING search to find proteins that are predicted to interact with your protein: http://string-db.org/#

7. Build a homology model of your protein using either SWISS-MODEL, the AlphaFold database of modeled protein structures, or AlphaFold itself (not as easy to navigate): http://swissmodel.expasy.org/ or already modeled AlphaFold models in their database: https://alphafold.ebi.ac.uk/ or a de novo AlphaFold 3 prediction: https://alphafoldserver.com/about

8. Paragraph summarizing the above results from points 2.1-2.7.

From any of the above results, there are likely some clues about structural similarity to your protein via the modeled results. Download the coordinates from this hit either directly from the bioinformatics website or download them from the PDB.
Then view the coordinates using protein structure viewers that are either embedded in the results of the website or standalone viewers like PyMOL, Web3DMol, POLYVIEW-3D, or COOT (just google the websites; if your protein did not yield any structural similarity hits, use the protein you selected in Part 1 of this homework). Some of these are web browser-based viewers, so you don’t need to install any program if that’s what you prefer. Inspect the structure using the viewer you selected and also test some of the viewer options. Take a few snapshots of this protein structure to include in your homework printout.

3. Only for BIOC 434 students: from the output of the BLASTP search of Section 1-6 above, select 4 sequences of varying sequence identity and save those 4 as FASTA sequence files (so not all ~99.9% sequence identity, but preferably much lower). Then make a multiple sequence alignment using Clustal Omega with your sequence and the 4 you obtained from the BLASTP search. Remember that you should add ‘>name’ above each sequence: https://www.ebi.ac.uk/Tools/msa/clustalo/ (see my lecture notes as well on this topic and an input example). Only show the phylogenetic tree and the sequence identity table in your output (see my lecture notes). Attach the example homework at the end of this PDF.

Remember, for each of the outputs for everything you’ve done above, to take 1 or more image snapshots to highlight key results and describe in a few sentences what each means right next to it. Also, don’t forget to include final summarizing paragraphs for each of the proteins you studied to describe your bioinformatics findings regarding possible structure, function, etc. In your final paragraph, you could also include suggestions for experiments to test your function/structure predictions.
Also, to be successful, you need to demonstrate that you can figure out how to understand the output of the various bioinformatics websites on your own by finding and reading the article that describes the bioinformatics tool you are using. Note that I have 1 example homework included on Canvas, but that is an old example and does not have all of the items (7 for Section 1, 7 for Section 2, and Section 3 for BIOC 434 students only).


[SOLVED] ARC180 Computation and Design Assignment 2 Implying a City Matlab

ARC180: Computation and Design
Assignment #2: Implying a City

Selection from ‘Framework Houses’, Bernd and Hilla Becher

In Assignment #1, you selected a typology of fabric building in Toronto and produced a Rhino model and drawing of one instance of the typology. In Assignment #2a, you will design a parametric model that can reproduce your chosen typology across a range of sites and scales. Your parametric model will use Toronto Property Data Maps as a primary input. Your model should accept a given site outline and build your typology by setting back from property lines. The model should also be driven by an additional parameter of your choosing, such as roof slope or overall window/wall ratio. Strong projects will include additional options that might activate in differing site contexts. For example, houses on narrow lots might deploy a staircase running parallel to the street, while houses on wider lots might run their staircase perpendicular to the street. Think critically about how site informs your chosen typology and work backwards from its ‘building blocks’: where stairwells are located, or the minimum sizes of rooms, for example.

First, produce a series of diagrams or sketches that describe five of these typological ‘building blocks’. You can include site setbacks and orientation in these sketches or diagrams; read as a whole, they should describe the step-by-step process by which your typology is constructed.

Step-by-step development of the Les Closiaux gymnasium by Dominique Coulon & associés

In addition to these diagrams, you will produce a design space drawing showing at least 9 examples of your typology with varying sites and parameters. Each example of your typology must be produced using your parametric model. You can choose the parameters and site types to vary, but the design space drawing should demonstrate a range of outputs that approximate the range of real-world examples of your typology.
I suggest working with three different sites on the x-axis and a varied parameter on the y-axis.

Design space drawing from ‘Dimensionality Reduction for Parametric Design Exploration’ by John Harding

You can choose whether the model describes the typology’s interior, exterior, or both. You are permitted to switch typologies from your first assignment, if you choose.

Submit the following two files to Quercus on November 5th by 11:59 pm Eastern time:

1. PDF document
o On the first page, display your design space drawing, name and student number
o On the second and following pages, show the following:
§ Images of at least four different examples of your typology, describing its deployment on varying site conditions
§ Five or more diagrams or sketches showing step-by-step development of your typology
§ A high-res image of your Grasshopper script
§ A 150-250 word narrative describing your typological ‘building blocks’ and modelling process

2. Your Grasshopper file in .GH format. All references to Rhino geometry must be internalized, including your initial link to the property data map.

This assignment is worth 35% of your final grade. The mark breakdown is as follows:

15%: Organization and performance of your Grasshopper script. Does your script open correctly? Does it accurately reproduce your typology on a variety of sites? How detailed are the models produced by your script?

5%: Success of step-by-step development diagrams or sketches. Do your diagrams clearly communicate the design development of your typology?

10%: Success of design space diagram. Does your design space diagram indicate the breadth of different typological outcomes?

5%: Clarity and creativity. Does your written submission clearly describe your modelling process? Did you approach the assignment in a creative or conceptual way?


[SOLVED] SCC361 Artificial Intelligence Coursework 2 Genetic Algorithms R

SCC361 Artificial Intelligence
Coursework 2 - Genetic Algorithms
Date Released: Friday, 15th November 2024
Date Due: Friday, 13th December 2024 at 4pm
This coursework is worth 20% of your overall mark

1. Introduction

In this coursework, your task is to implement a Genetic Algorithm (GA) to address a designated problem, and subsequently present the results. The foundational concepts of GA were introduced and discussed during the week 6 lectures, and the lab sessions in week 7 were dedicated to familiarizing you with GA. However, it's essential to note that this coursework involves applying GA to a distinct problem, different from those addressed in the lab sessions.

2. Problem Definition

The task at hand involves implementing and enhancing a robot motion planning algorithm using Genetic Algorithms (GAs). Assume that you have a robot with an overhead camera. The first step in motion planning is to create a representation of the robot's operating environment. This involves constructing a map that includes information about obstacles, terrain, and other relevant features. For the purposes of this CW, we simplify the process by assuming the existence of such a map, already provided as input, with dimensions of 500 pixels by 500 pixels. An illustrative example of such a map is shown in the figure below.

The next step is localisation, which refers to the process of determining the robot's precise position and orientation within its operating environment. It is a crucial aspect of robotics that enables a robot to understand and update its location relative to a given coordinate system or map. Effective localisation is essential for accurate navigation, as the robot needs to know where it is in order to plan and execute its movements successfully.
For the purposes of this CW, it is presumed that both the starting point and the destination of the robot are provided as follows: start = [1, 1], finish = [500, 500]. The primary objective of the CW is thus limited to path planning for the robot. Let a path be characterized by a fixed number of points n_p in the robotic map. As shown in the figure below for n_p = 10, the path is constructed by commencing from the start point [1, 1] and connecting it to the first point through a straight line. Subsequently, each point is connected to the next in sequence by straight lines until the final point is linked to the finish point [500, 500]. To keep things simple, we're treating the robot as a tiny point (or a pixel) instead of its actual size. 3. Problem Solving using Genetic Algorithms In formulating the problem as an optimization challenge for resolution by a Genetic Algorithm (GA), two key elements are essential: defining an objective function and specifying the variables of that function along with their bounds. The objective function, in this context, is the length of the path, where a shorter path is considered more favourable. A penalty should be imposed if any part of the path intersects with an obstacle, with the penalty proportional to the path length within the obstacle. The optimization variables are the coordinates (x, y) of each of the fixed number of points along the path. The variable bounds are determined such that each point resides within the map. Specifically, the lower bound is set to 1, and the upper bound corresponds to the length or width of the map for the x and y axes (i.e., 500). Each of these points, arranged one after another, constitutes the genetic individual utilized in the optimization process. Each point in the path marks a point of turn. The total number of points "noOfPointsInSolution" should be considered as a parameter and should be equal to the maximum number of turns a robot is expected to make in the robot map.
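The objective just described (total Euclidean path length, plus a penalty proportional to the length of path lying inside obstacles) could be sketched as follows. This is an illustrative Python version only, since the coursework must be implemented in MATLAB; the function name, the per-pixel sampling of each segment, and the penalty weight `w` are my assumptions, not part of the brief.

```python
import math

def path_fitness(points, grid, start=(1, 1), finish=(500, 500), w=100.0):
    # points: list of (x, y) waypoints; grid[x][y] is True for free space.
    # Hypothetical helper: returns path length plus w times the approximate
    # length of path crossing obstacles (w is a free design parameter).
    full = [start] + list(points) + [finish]
    length = penalty = 0.0
    for (x0, y0), (x1, y1) in zip(full, full[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        length += seg
        n = max(int(seg), 2)          # sample roughly one point per pixel
        blocked = 0
        for k in range(n):
            t = k / (n - 1)
            x = min(max(int(round(x0 + t * (x1 - x0))), 1), len(grid))
            y = min(max(int(round(y0 + t * (y1 - y0))), 1), len(grid[0]))
            if not grid[x - 1][y - 1]:
                blocked += 1
        penalty += seg * blocked / n  # fraction of the segment in obstacles
    return length + w * penalty
```

Because the penalty scales with the length of path inside obstacles, the GA can still rank infeasible candidates, which helps early generations before any fully obstacle-free path exists in the population.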
How to Read and Generate Random Maps In order to implement your code, you need to first read a map as a binary image as follows: map=im2bw(imread('random_map.bmp')); where 'random_map.bmp' is the name of an example map. A MATLAB script named "Generate_Random_Map.m" is provided to generate different (simple to complex) binary maps. Implement Genetic Algorithm Your main task involves the implementation, using MATLAB, of a genetic algorithm designed to discover an optimal path through an evolutionary process. It is important to note that the built-in ga function in MATLAB is not permissible for use in this task. Instead, you are required to develop an approach akin to what was covered in the lab sessions during week 7. Your algorithm should make use of appropriate crossover and mutation operators. Numerous design decisions, including the implementation details of the algorithm and the selection of parameter values, will be required. Your code should output an image displaying the final best path using the provided code below: path = [start; [solution(1:2:end)'*size(map,1) solution(2:2:end)'*size(map,2)]; finish]; clf; imshow(map); rectangle('position',[1 1 size(map)-1],'edgecolor','k'); line(path(:,2),path(:,1)); The solution is a row vector of size 2 × n_p. One possible output is shown below. Your code should include functionality to display both the execution time of the Genetic Algorithm (GA) and the total Euclidean distance of the optimal path. Important Notes 1. You need to implement THREE selection methods: Roulette wheel selection (RWS), Tournament selection and Rank-based selection. The code for RWS is available in Week 6's lab material on Moodle. Conduct some research to gain an understanding of how Rank-based selection operates, and subsequently, incorporate this knowledge into the implementation. Note that for the Tournament selection method, you need to consider its variants/parameters as variables. 2.
You need to implement TWO appropriate cross-over operators. Note that k-point cross-over is counted as one method, and thus, changing k to 1, 2, 3 doesn't count as three cross-over operators. 3. You need to implement TWO appropriate mutation operators. 4. Your algorithm should provide optimal or near-optimal solutions for any combination of selection/cross-over/mutation. 5. Upon initiation, the code should prompt the user to specify the types of selection, crossover, and mutation to be employed. For both crossover and mutation operators, the user should input a binary digit (0 or 1) to signify a specific type for each operator. For example, when selecting the crossover method, entering 0 signifies opting for crossover method 1, while entering 1 indicates the adoption of crossover method 2. Regarding selection, the user should input 0, 1, or 2 to indicate the chosen method. 6. The code should output an image displaying the final best path similar to the one shown in the figure above. 7. The code should be efficient and produce the required outputs. Try to use a minimum number of loops in your code. 8. The maximum number of iterations/generations and the size of the population should be as small as possible. 6. Submission Please save your code in a zip file called "CW2_ID.zip", substituting ID with your student ID. Submit the code on Moodle on or before the deadline of 4pm on Friday, 13th December, 2024. The code could be submitted as a single .m file or with multiple separate function files and a "main.m" file. If for some reason you cannot upload your submission, please zip it up and e-mail it to [email protected] as soon as possible. Your submission should be anonymous and not include your name in the code. 7. Marking There are a total of 20 marks for this coursework and the coursework mark constitutes 20% of your overall mark for this module. • 2 marks are allocated to the structure of the code.
It should be well-structured and easy to understand. The code should be run by pressing the Matlab RUN button without requiring  any changes. •    10 marks are allocated to the correctness of the results. The code is expected to produce a solution that is close to optimal. Your code will be executed 10 times to assess how frequently it generates an optimal or near-optimal solution. The expectation is to achieve optimal results in 80% of the runs. •    2 marks are allocated to the commenting. There should be enough comments to understand the code. A report is not required for this coursework; the code should be well-commented with enough detail on methods used/ implemented. •    6 marks are allocated to time complexity of the algorithm (in all settings). The code should prioritise time efficiency and use matrix operations for computational advantage. The maximum number of iterations/ generations and population size should be minimised for optimal performance.
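Note 1 above asks you to research rank-based selection. One common variant is linear ranking, sketched below in Python for illustration only (the coursework itself must be written in MATLAB). The function name and the selection-pressure parameter `sp` are my choices, and fitness is treated as a cost (shorter path = lower fitness = better), matching the objective defined in the brief.

```python
import random

def rank_based_select(population, fitnesses, sp=1.5):
    # Linear-ranking selection, one common variant (the brief leaves the
    # exact scheme open). Fitness is a cost: lower = shorter path = better.
    # Individuals are ordered worst-to-best and rank r gets weight
    # (2 - sp) + 2*(sp - 1)*r/(n - 1), with selection pressure sp in [1, 2].
    n = len(population)
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    weights = [(2 - sp) + 2 * (sp - 1) * r / (n - 1) for r in range(n)]
    pick = random.choices(order, weights=weights, k=1)[0]
    return population[pick]
```

With sp = 2 the worst individual gets zero weight and the best is selected twice as often as the median; sp = 1 degenerates to uniform selection. Because only ranks matter, this scheme is insensitive to the raw scale of the path-length fitness, unlike roulette-wheel selection.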


[SOLVED] ECON 425 Topics in Monetary Economics The International Monetary System from the Gold Standard t

ECON 425 Topics in Monetary Economics: The International Monetary System from the Gold Standard to War in Ukraine Final Exam December 9, 2024 Instructions: You have six hours to work on this exam. It is worth 100 points, contributing to your overall score for the course as described in the Syllabus. You may consult all course materials and standard Internet resources while working on the exam, but your work must be original and you may not solicit or obtain assistance from or provide assistance to other people for any part of the exam (this includes obtaining help from artificial intelligence). Activities considered cheating include copying or closely paraphrasing content from websites, discussing exam questions with other students, and/or using ChatGPT or similar tools. All exams will be checked for originality and copied content, and anyone found cheating will be assigned a zero score for the exam. Read carefully each part of each question before you jump into working on it and do not panic if you cannot complete everything. The exam is also intended to stretch your knowledge by forcing you to use the tools and information you have acquired. I want to see how you think about issues based on what you learned. If there is a question you cannot answer, do not get bogged down; move on to the next question. Write legibly. Problem 1: International Monetary Policy Cooperation after a Crisis (50 Points) We can use a slightly modified version of the model you analyzed in Homework 4 to study international monetary policy cooperation in response to a financial crisis.
Suppose we modify the money demand equations for the Home and Foreign countries in the model by introducing money velocities v and v* as follows: m + v = p + y, (1) m* + v* = p* + y*. (2) Velocity is the rate at which agents in each country dispose of money for transactions. For a given money supply, higher velocity translates into higher prices and/or output. The introduction of velocities v and v* is the only change we make to the Homework 4 model. Hence, refer to Homework 4, pages 1 and 2, for the description of the rest of the model. Assume that velocity in each country is an independently and identically distributed exogenous shock with an average value of zero, like the exogenous productivity shock x in the production function (as for other variables, v and v* measure the percent deviations of Home and Foreign money velocity from the zero-shock equilibrium). We take velocity as the indicator of the situation of credit markets: a credit market collapse results in a drop in velocity (negative realizations of both v and v* in a global credit crisis) as agents have an incentive to hold on to liquidity and the number of transactions in the economy drops. As in Homework 4, assume that Home wage setters chose w at time −1 to minimize E_{-1}(n^2)/2, and similarly in Foreign. Also continue to assume that the central bank in each country wants to stabilize CPI and employment at their zero-shock levels, and it minimizes the same loss function as in Homework 4 (bottom of page 3).
● Show that the following results hold in our extended model with velocity shocks (it is enough to show this for one country, say, Home): w = E_{-1}(m + v) = 0; w* = E_{-1}(m* + v*) = 0; n = m + v; n* = m* + v*; y = (1 − α)(m + v) − x; y* = (1 − α)(m* + v*) − x; p = α(m + v) + x; p* = α(m* + v*) + x. Note that there are two ways to show this: one is by using "brute force" and doing the algebra, the other is by being smart, thinking carefully about the one change we made relative to the Homework 4 model, and what it should imply for these equations and those in the next bullet relative to results in Homework 4. Feel free to use the smart strategy if you figure it out, but explain it clearly. Note: A credit crisis (a drop in velocity) puts downward pressure on employment, output, and prices. ● Show (by doing the algebra or by being smart) that the nominal exchange rate and the terms of trade are determined by: and Home and Foreign CPIs are: ● Assume equal country size (a = 1/2) and show that the first-order condition for the optimal choice of money supply by the Home central bank under non-cooperation is: and that it can be rewritten as: Define the following coefficients: Then, we can rewrite equation (3) as: or: ● Explain the signs of the Home central bank's responses to Foreign money supply, Foreign money velocity, Home money velocity, and the productivity shock. Note: Explain does not mean "state in words." It means explain why the sign is what it is. ● Show (by brute force or by being smart, but explaining it) that the foreign central bank's behavior is determined by the reaction function: ● Show that the Nash equilibrium level of Home money supply is: If you use the expressions for H1(N) and H2(N), you can verify that: Take this for granted. I am not asking you to prove it.
Taking the expression for H3(N) into account, it follows that: ● Show (by brute force or by being smart, but explain it) that the Nash equilibrium level of Foreign money supply is: ● What is the intuition for how the central banks respond to velocity shocks in the Nash equilibrium? ● If x = 0, what are the Nash equilibrium values of the central banks' loss functions LCB and LCB*? ● What is the intuition for these values? ● Show that the first-order condition for the optimal choice of m when the two central banks coordinate their policies (i.e., jointly minimize a combination of the loss functions with weights equal to 1/2) is: Proceeding as in the non-cooperative case, we can rewrite equation (8) as: where we define: This implies the cooperative setting of m according to: ● Show (by using brute force or by being smart, but explaining it) that the first-order condition for the cooperative choice of m* yields: ● Show (by using brute force or by being smart, but explaining it) that the solution for Home money supply under cooperation (mC) is: Using the expressions for H1(C) and H2(C) shows that H1(C) − H2(C) = 1 − √(1 − α²). Hence, taking H3(C) = √α into account, we have: ● Show (by using brute force or by being smart, but explaining it) that the solution for Foreign money supply under cooperation (m*C) is: ● Why do the central banks respond to velocity shocks in the same way as they did in the Nash equilibrium? ● If x = 0, is there anything to be gained from international monetary cooperation in response to the velocity shocks v and v*? Why? However, the responses to the productivity shock x differ between Nash and cooperative equilibria. In particular, we can verify that the cooperative responses are less aggressive than the non-cooperative ones (I am not asking you to verify it). ● What explains the reduction in policy aggressiveness when central banks cooperate in responding to x?
Now remember what we learned from Ben Bernanke's article on the Great Depression: Credit market crises have negative supply-side effects as asymmetric information issues make access to credit harder and prevent firms from operating efficiently. In our model, we can capture this by positing that the shock x, instead of being purely independent from v and v*, depends on these variables: x = x(v, v*). In particular, suppose that, when v and v* become negative (a global credit crisis), x becomes positive (a productivity loss). Suppose that parameter values are such that the overall response of monetary policy to the combination of velocity and productivity shocks in each country is expansionary in both the Nash equilibrium and the cooperative one. ● How does the response to the combined shocks in the cooperative equilibrium differ from that in the Nash equilibrium (is it more or less expansionary)? ● Why? Bottom line: Using a small extension to the Homework 4 problem and remembering what we learned from Ben Bernanke offers a possible explanation for why central banks may find it desirable to coordinate their responses to global credit crises. Problem 2: A Simple Theory of Exchange Rate Random Walks (30 points) Richard Meese and Kenneth Rogoff argued in a 1983 Journal of International Economics paper that the exchange rate models written in the 1970s (like Rudiger Dornbusch's overshooting model) could not predict the future exchange rate better than a random walk, i.e., better than just taking the current exchange rate as our best forecast of the future exchange rate. This result was a major stumbling block for exchange rate theories for decades because it was viewed as inconsistent with theory. This problem asks you to explore a simple theoretical model that generates random walk behavior of the exchange rate.
Consider two countries, Home and Foreign, and suppose uncovered interest rate parity (UIP) holds, so that: i_t − i_t* = E_t(ε_{t+1}) − ε_t, where i_t and i_t* are the Home and Foreign nominal interest rates, ε_t is the exchange rate (units of Home currency per unit of Foreign), and E_t is the expectation operator conditional on information available at time t. Assume that the Home and Foreign central banks set their interest rates in response to CPI inflation according to the policy rules: i_t = Tπ_t + ξ_t and i_t* = Tπ_t* + ξ_t*, with T > 1, where π_t and π_t* are the Home and Foreign CPI inflation rates, and ξ_t and ξ_t* are identically and independently distributed random shocks such that E_t(ξ_{t+1}) = E_t(ξ*_{t+1}) = 0. Inflation rates are defined by π_t ≡ p_t − p_{t−1} and π_t* ≡ p_t* − p*_{t−1}, where p_t and p_t* are the Home and Foreign CPIs. Assume also that purchasing power parity (PPP) holds, so that: p_t − p_t* = ε_t. ● Use the policy rules of the central banks, the definitions of inflation rates, and PPP to show that policy behavior by the central banks implies: i_t − i_t* = T(ε_t − ε_{t−1}) + ξ_t − ξ_t*. ● Combine this equation with the UIP condition to show that the exchange rate satisfies the equation: E_t(ε_{t+1}) − ε_t = T(ε_t − ε_{t−1}) + ξ_t − ξ_t*. (11) Trust me on the following statement: Given the assumption T > 1, this equation has a unique solution for the exchange rate in every period t that has the form: ε_t = ε_{t−1} + η(ξ_t − ξ_t*), (12) where η is a parameter (to be determined) that measures the responsiveness of the exchange rate to the difference between exogenous Home and Foreign interest rate shocks. ● What do this form of the solution for the exchange rate and the assumptions we made above imply for E_t(ε_{t+1})? For (12) to be the solution for ε_t, we must find the value of η such that, if we substitute (12) into (11) for ε_t, we obtain 0 = 0.
● Substitute (12) into (11) for ε_t, use the result you obtained in the previous bullet, and show that it must be: This result implies that the solution for the exchange rate in the model of this problem is: ● Is this solution consistent with your intuition for what should happen to the exchange rate if a central bank imparted a contractionary shock to monetary policy? Briefly explain your answer. ● Is the solution for the exchange rate consistent with Meese and Rogoff's finding mentioned at the beginning of this problem? Essay Question: The Case for Flexible Exchange Rates and the Role of Nominal Rigidity and Currency of Price Setting (20 Points) Using a maximum of two letter-size pages, explain Friedman's case for flexible exchange rates, and why price stickiness and currency of price setting matter. If you type your essay, use double spacing and a 12-point font.
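As a sketch of the substitution step in Problem 2 (my algebra, not the official solution; writing ε_t for the exchange rate): since the shocks are i.i.d. with zero conditional mean, the conjectured form (12) implies E_t(ε_{t+1}) = ε_t, so the left-hand side of (11) vanishes and

```latex
\begin{align*}
0 = E_t(\varepsilon_{t+1}) - \varepsilon_t
  &= T(\varepsilon_t - \varepsilon_{t-1}) + \xi_t - \xi_t^* \\
  &= (T\eta + 1)\,(\xi_t - \xi_t^*)
  \quad\Longrightarrow\quad \eta = -\frac{1}{T}.
\end{align*}
```

So ε_t = ε_{t−1} − (1/T)(ξ_t − ξ_t*): a contractionary Home shock (ξ_t > 0) appreciates the Home currency on impact, and because the shock difference is unforecastable, the exchange rate follows a random walk, consistent with the Meese-Rogoff finding.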


[SOLVED] CDS533 Assignment 3 Statistics for Data Science SQL

CDS533 Assignment 3 (Statistics for Data Science) The assignment may consist of either two parts—a problem set and an R component—or just one part. Please follow the guidelines provided below for each. Problem Set: 1. Manually solve each question and clearly demonstrate all necessary steps. 2. You can either type out your answers in MS Word or provide scanned images of your written answers. 3. Ensure that your handwriting is clear and legible, and that the quality of scanned files is sufficient. Unclear submissions will not be accepted. R Component: 1. Provide screenshots of R console outputs or insert R plots as required by the questions. 2. For questions involving analysis based on R outputs, please provide detailed explanations to showcase your understanding. On final submission Personal Information: Ensure that your full name and student ID (SID) are written at the very beginning of the document. Final Submission Format: ● Round your final answers to 3 decimals. ● Your final submission, including the answers from the problem set and R component, should be merged into a single Microsoft Word document (.docx). Submission Deadline: Please upload your work in Moodle by 1:30pm 18th Nov, 2024. NOTE: USE P-VALUE METHOD FOR HYPOTHESIS TESTING PROBLEMS. 1. A study of MBA graduates for The American Graduate Survey 1999 revealed that MBA graduates have several expectations of prospective employers beyond their base pay. In particular, according to the study 46% expect a performance-related bonus, 46% expect stock options, 42% expect a signing bonus, 28% expect profit sharing, 27% expect extra vacation/personal days, 25% expect tuition reimbursement, 24% expect health benefits, and 19% expect guaranteed annual bonuses. Suppose a study is conducted in an ensuing year to see whether these expectations have changed.
If 125 MBA graduates are randomly selected and if 66 expect stock options, does this result provide enough evidence to declare that a significantly higher proportion of MBAs expect stock options? Let α = 0.05. If the proportion really is 0.50, what is the probability of committing a Type II error? And what is the power of this test? 2. Using R to complete this question. An American karate studio plans to advertise but is unsure as to which of three ads to use. The ads are tested on randomly selected consumers and the reactions measured on an ordinal scale that produces the following data: Red 80, 80, 78, 81, 72, 85, 96, 84, 71, 75, 98 White 75, 55, 98, 92, 86, 78, 87, 79, 88, 87, 85, 94, 99 Blue 72, 76, 70, 77, 68, 82, 85, 81, 65, 69 Test the claim that reactions are the same for the three different ads. Perform the Kruskal-Wallis test. If appropriate, follow with Wilcoxon tests to decide which ad should be given the contract, and interpret your results. 3. An experiment was conducted to compare the wearing qualities of three types of paint. Ten paint specimens were tested for each paint type and the number of hours until visible abrasion was apparent was recorded. Assume that the variances are not significantly different, that the distributions are approximately normal, and that the measures are numerical. Is there evidence to indicate a difference in the three paint types? Construct the ANOVA table and show your calculation. Each group has 10 readings with the following statistics: a) Compute an ANOVA table for these data (using a hand calculator), including all relevant sums of squares, mean squares, and degrees of freedom. b) State the statistical model underlying the procedures in the analysis of variance as applied to these data. Define symbols used and make clear all distributional assumptions. c) State in words and symbols the null hypothesis and alternative hypothesis appropriate to this problem.
Compute the relevant F-test and find the p-value for the test. 4. Suppose we have a data set with five predictors, X1 = GPA, X2 = IQ, X3 = Level (1 for College and 0 for High School), X4 = Interaction between GPA and IQ, and X5 = Interaction between GPA and Level. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get β̂0 = 50, β̂1 = 20, β̂2 = 0.07, β̂3 = 35, β̂4 = 0.01, β̂5 = −10. a) Write out the regression equation. b) Which answer is correct, and why? i. For a fixed value of IQ and GPA, high school graduates earn more, on average, than college graduates. ii. For a fixed value of IQ and GPA, college graduates earn more, on average, than high school graduates. iii. For a fixed value of IQ and GPA, high school graduates earn more, on average, than college graduates provided that the GPA is high enough. iv. For a fixed value of IQ and GPA, college graduates earn more, on average, than high school graduates provided that the GPA is high enough. c) Predict the salary of a college graduate with IQ of 110 and a GPA of 4.0. d) True or false: Since the coefficient for the GPA/IQ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer. 5. Using R to complete this question. This question involves the use of simple linear regression on the "Auto" data set. a) Use the lm() function to perform a simple linear regression with mpg as the response and horsepower as the predictor. Use the summary() function to print the results. Comment on the output. For example: i. Is there a relationship between the predictor and the response? ii. How strong is the relationship between the predictor and the response? iii. Is the relationship between the predictor and the response positive or negative? iv. What is the predicted mpg associated with a horsepower of 98? v.
What is the associated 95% confidence interval of β1? b) Plot the response and the predictor. Use the abline() function to display the least squares regression line. c) Use the plot() function to produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit. 6. A substance used in biological and medical research is shipped by airfreight to users in cartons of 1,000 ampules. The data below, involving 10 shipments, were collected on the number of times the carton was transferred from one aircraft to another over the shipment route (X) and the number of ampules found to be broken upon arrival (Y). Assume the first-order regression model Y = β0 + β1X + ε is appropriate.

i:  1   2   3   4   5   6   7   8   9  10
X:  1   0   2   0   3   1   0   1   2   0
Y: 16   9  17  12  22  13   8  15  19  11
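The run-together data in question 6 appear to decode to ten (X, Y) pairs. Assuming that reading, the least-squares fit can be sanity-checked in a few lines, shown here in Python purely as an illustration (the closed-form computation is the same one you would do by hand or in R):

```python
def ols_simple(x, y):
    # Closed-form least squares for the simple model Y = b0 + b1*X:
    # b1 = Sxy / Sxx, b0 = ybar - b1 * xbar.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx
    return my - b1 * mx, b1

# Question 6 data as decoded above (X = transfers, Y = broken ampules)
x = [1, 0, 2, 0, 3, 1, 0, 1, 2, 0]
y = [16, 9, 17, 12, 22, 13, 8, 15, 19, 11]
b0, b1 = ols_simple(x, y)  # fitted line: Y-hat = 10.2 + 4.0 X
```

The fitted intercept of 10.2 broken ampules and slope of 4.0 per extra transfer give a quick check on any R output for this question.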


[SOLVED] MANG1053- INTRODUCTION TO MARKETING

MANG1053 - INTRODUCTION TO MARKETING Guidance on Format/Presentation: Format: The text should be no more than 3,000 words (+/- 10%). Assignments which significantly exceed the word limit (more than 10%) will be penalised. Title page, table of contents & list of references will NOT be included in the word count. Please use Times New Roman size 11. You can use bold, italic and/or underlined text to highlight key information and/or for titles. Structure: The report should be clearly written and logically structured. You should provide a ‘REPORT’ structure. Please follow the assessment brief for the structure of your report: see Appendix with example. Illustrations: Please illustrate your report with the required marketing models (see brief), academic and business in-text references, and relevant visual data (logo, graphs, photos, charts...) to make it visually impactful and professional (but ONLY if it adds value to the report). All figures, charts, tables and pictures need a number, a title, and a source. Referencing: The Harvard referencing system should be adopted. You need to add in-text citations. For each in-text citation, you need to write the corresponding reference in Part E in Harvard referencing format. Examples: In-text reference for an article: “The universal value of beauty: high aesthetics are almost always preferred by consumers (Page and Herr, 2002).” Corresponding reference for article: Page, C., & Herr, P. M. (2002), ‘An investigation of the processes by which product design and brand strength interact to determine initial affect and quality judgments’, Journal of Consumer Psychology, 12(2), pp. 133–147. In-text reference for website: “Gucci’s multi-patterned tiger emblazoned sweaters, that has been, is currently, and is expected to continue to be successful (The Front Row, 2020).” Corresponding reference for website: The Front Row. (2020). Why is Fashion so Ugly?. Retrieved from https://medium.com/@huangakil8/why-is-fashion-so-ugly-1fb67669dc6e.
Accessed 15 Jan 2021. Recommended Sources: Only recent data! Industry Reports: Bain & Company, McKinsey, Deloitte, Mintel, Monitor, Statista (some available on the library website). Academic Sources: Textbook: Kotler, P. et al. (2019), Principles of Marketing. Pearson Higher Education. Library Website: Select Subject -> Marketing -> Business Source Complete -> EBSCO -> Journal of Business Research, Journal of Marketing, Journal of Consumer Research… Guidance on Content: Please choose a brand you like, but you cannot choose Amazon, Google, McDonald's, Starbucks, Coca-Cola, or Apple. The assignment should be in a report format, and you are required to use marketing theory to frame your answer throughout. You are advised to read marketing textbooks and academic journal articles to identify relevant theory. Class material should be used, including models and matrices seen in class, but only to illustrate a statement. You are advised to write in the 3rd person. Try as much as possible to be analytical rather than just descriptive.


[SOLVED] EE E4321 Fall 2019 Final Examination Java

Department of Electrical Engineering Final Examination EE E4321. Fall, 2019 December 16, 2019 Time: 180 minutes Total: 250 points 1. Consider an inverter operating with a supply voltage of VDD with logic signals that swing between ground and VDD. Let VT = 0.32 V for the nFET and −0.32 V for the pFET. Let the above-threshold drain current be governed by the equation ID = WCoxvsat(VGS − VT), where WCoxvsat = 1 mA/V, for the nFET and by the equation ID = WCoxvsat(VSG + VT), where WCoxvsat = 1 mA/V, for the pFET. For the nFET, Ioff is defined by VGS = 0.0 V and VDS = VDD. For the pFET, Ioff is defined by VSG = 0.0 V and VSD = VDD. For the nFET, Ion is defined by VGS = VDD and VDS = VDD. For the pFET, Ion is defined by VSG = VDD and VSD = VDD. (a) Let VDD = 1.2 V. If the current of the nFET (pFET) is 100 μA at VDS (VSD) of VDD at a VGS (VSG) of 0.32 V, what is the off-current (Ioff)? Assume a subthreshold slope of 90 mV/decade and ignore DIBL. What is the Ion/Ioff ratio for each transistor? (10 points) (b) Now let VDD = 0.2 V. What is the Ion/Ioff ratio for each transistor? (10 points) (c) If this inverter drives a load of 10 fF, estimate the delay of this inverter at VDD = 0.2 V. (10 points) (d) If the inverter (with VDD = 0.2 V) is stimulated with a 50%-duty-cycle square wave at its input switching between 0 and 0.2 V with a period of 1 nsec, what is the average power consumed by this inverter? (10 points) 2. Consider short-channel effects. Remember that device VT is strongly affected by bulk charge under the channel. (a) One typically finds that the magnitude of the threshold voltage of a short-channel device decreases with increasing VDS. Why is this? (10 points) (b) Before the advent of advanced halo doping techniques, one found that circuits with smaller channel lengths had threshold voltages of lower magnitude. Why was this?
This is the effect we described in class as “VT rolloff.” (10 points) (c)       Modern  device  technologies  have  introduced  halo  doping.      In  this  case,  the  region directly under the source and drain is more heavily doped than the substrate as a whole.    This helps control the extent of the source and drain depletion regions.   The  case  of the nFET is shown below.  In the presence of halo doping, one generally finds an “inverse” rolloff in which smaller channel lengths lead to higher magnitude threshold voltages.  Why might this be? (10 points) 3.         Consider the  carry chain of the four-bit  static ripple carry adder shown below.   Cload models the loading of the sum circuit at each bit.  For this technology, transistors are assumed to provide  kgate   =   1  fF/μm  of  loading.    Assume  that  the  fanout-of-one  (FO1)  delay  for  this technology is τ1  = 25 psec. (a)       If W = 1 μm, what is the delay of the carry chain from Cin  to Cout?  (10 points) (b)       If VDD   =  1.2 V  and  W =  1  μm, what is the energy dissipated by a single propagated transition through the carry chain?  (10 points) (c)       What  value  of W will  minimize the delay of this carry chain?   What  energy  will be dissipated at this value of W for a single transition through the chain?  (9 points) (d)       Repeat the calculation of (c) if Cload  = 0. (10 points) 4.          Sketch the waveforms for scan_enable, clock, scan_in, and scan_out for a test sequence to scan in a test pattern, clock the system once, and scan out the resulting pattern such that a stuck-at-one fault at node X could be detected.   Show  where this would be  detected in your waveform. (20 points) 5.          Consider the clock chopper circuit shown below.   The delay block (labelled ∆) can be implemented with an even number of inverters. (a)        Consider  two  clock-chopped  flip-flops  separated  by  combinational  logic  as  shown below.  
Assuming that the combinational logic represents a fixed delay of 200 psec and there is skew between CLK1 and CLK2 such that CLK2 arrives 100 psec earlier than CLK1, what is the maximum value ∆ (the pulse width of the chopped clock) can have and still ensure correct functionality? Assume that the latches have a hold time of 50 psec on their closed clock edge. (10 points)
(b) Assume that this chopped clock with the pulse width calculated in (a) is driven from an inverter with a load capacitance of 200 fF. The inverter is sized 4 μm / 2 μm in a technology with a FO1 delay of τ1 = 25 psec. Assume a gate capacitance of 2 fF/μm. Sketch the waveform of the clock as received by the latch. What will be the “height” of the pulse? What problems might this present? (12 points)
6. Consider the following dynamic PLA structure.
(a) We discussed in class how it is necessary to derive the clocks φAND and φOR in such a manner that the OR plane does not begin to evaluate until after the AND plane has evaluated. Furthermore, both φAND and φOR must allow for adequate precharge time. If the clocks satisfy this relationship, what is the logic function of this PLA (i.e., what are y0 and y1 as a function of x0, x1, and x2)? (10 points)
(b) Design a replica AND row, two instances of which can be used in the circuits below to generate φAND and φOR to meet the timing constraints described in (a). (8 points)
7. Consider a long global wire from driver A to receivers B and C as shown below. Assume that the wire is 0.4 μm wide and has a resistance of 0.076 Ω/square. Its capacitance per unit length is given by:
Assume that each receiver has a load of 100 fF.
(a) Explain why the capacitance has the functional form specified above. (8 points)
(b) Calculate the Elmore delay from A to B. (10 points)
(c) Assume the Elmore delay calculated in (b).
If the FO1 delay of this technology is 50 psec, would resistance have to be considered for this line in calculating the delay from A to B? Explain. (8 points)
(d) Now imagine widening the wire from A to X from 0.4 μm to 10 μm. What is the Elmore delay now from A to B? (10 points)
8. Consider the following path from a dynamic gate through two static gates. Assume that the technology is characterized by a gate capacitance per unit width of 1 fF/μm. Let the FO1 delay of this technology be τ1 = 25 psec.
(a) Estimate the delay from A rising to E rising in the evaluate phase (φ = 1) if W1 = W2 = W3 = W4 = W5 = W6 = 1 μm. Let Csidebranch = 0. (10 points)
(b) Keeping W1 fixed at 1 μm, resize the network to minimize the delay from A rising to E rising with Csidebranch = 0. Please do not allow the beta ratio (ratio of pFET strength to nFET strength) for any gate to become more than 4 or less than 0.25. (15 points)
(c) Using the sizing of (b), what is the delay from A rising to E rising with Csidebranch = 200 fF? (10 points)
9. Joe Engineer is designing a simple pass-transistor latch as shown below. He finds that he is unable to write the latch; that is, when clk is high, Y is not affected by the value of A. What has gone wrong? How can this be fixed? (10 points)
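Problem 1(a)'s subthreshold extrapolation can be checked numerically. The sketch below is not part of the exam; it assumes the 100 μA point at VGS = VT anchors a fixed 90 mV/decade subthreshold slope, with the on-current taken from the linear above-threshold model given in the problem statement.

```python
# Hedged numeric sketch for problem 1(a). Assumption: the 100 uA point at
# VGS = VT can be extrapolated down to VGS = 0 with a fixed 90 mV/decade
# subthreshold slope; the on-current uses ID = WCoxvsat * (VGS - VT).

I_VT = 100e-6       # drain current at VGS = VT (A), from the problem
VT = 0.32           # threshold voltage magnitude (V)
S = 0.090           # subthreshold slope (V/decade)
WCoxvsat = 1e-3     # A/V, from the problem statement
VDD = 1.2

# Off-current: VGS = 0 sits VT/S decades below the VGS = VT anchor point.
I_off = I_VT * 10 ** (-VT / S)

# On-current from the linear above-threshold model at VGS = VDD.
I_on = WCoxvsat * (VDD - VT)

print(f"Ioff ~ {I_off:.3g} A")
print(f"Ion  ~ {I_on:.3g} A")
print(f"Ion/Ioff ~ {I_on / I_off:.3g}")
```

Note that for part (b), with VDD = 0.2 V below VT = 0.32 V, the on-state is itself subthreshold, so Ion must come from the same exponential extrapolation rather than from the linear model.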

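Problem 7's wire geometry and capacitance formula live in the exam figure and are not reproduced above, but the Elmore bookkeeping itself is mechanical: each resistance on the path is charged with the total capacitance downstream of it. The helper below is a generic sketch with made-up segment values, not the exam's numbers.

```python
# Hedged sketch of Elmore delay on a simple RC ladder:
#   tau = sum over each series resistance of
#         (that resistance) x (total capacitance downstream of it).
# Segment values are hypothetical placeholders; problem 7's actual wire
# data are in the exam figure.

def elmore_delay(segments):
    """segments: list of (R, C) pairs ordered from driver to receiver.
    Returns the Elmore delay at the far end of the ladder."""
    tau = 0.0
    for i, (R, _) in enumerate(segments):
        downstream_C = sum(C for _, C in segments[i:])
        tau += R * downstream_C
    return tau

# Hypothetical two-segment ladder: R1 = 100 ohm with C1 = 50 fF,
# then R2 = 100 ohm with C2 = 150 fF, so tau = R1*(C1+C2) + R2*C2.
tau = elmore_delay([(100.0, 50e-15), (100.0, 150e-15)])
print(f"Elmore delay ~ {tau * 1e12:.1f} ps")
```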
$25.00 View

[SOLVED] ARC180 Computation and Design Assignment 3 Typological Intelligences Java

ARC180: Computation and Design
Assignment #3: Typological Intelligences
[Cover image: Barton Myers, Sectional Perspective Looking West, Yorkville Public Library, Toronto]
In Assignment #2, you designed a parametric model that can reproduce a chosen typology across a range of sites and scales. For Assignment #3, you will design a contemporary response to your chosen typology. Crucially, you will design a system of possible approaches using parametric modelling techniques in Grasshopper. Your system can operate at a variety of scales, but on a technical level it should produce an array of outcomes that vary in response to a changing input—in this case, the boundaries of a site and at least one other varying parameter of your choosing. Students are encouraged to incorporate one or more of the conceptual threads from our seminar discussions. Your response could be a renovation, addition, or reinvention, but it must be related in some legible way to your chosen typology. You can retain as much or as little of the current typology as you wish, but your project must clearly demonstrate both your design contribution and its relation to your chosen typology.
[Images: Responses to typology: NADAA, previous page; Ryan W. Kennihan, top; Gordon Matta-Clark, bottom]
Demonstrate how your new architectural response fits on the three sites you chose for Assignment #2. You will do this by preparing a design space diagram, as in Assignment #2, that shows how one or more parameters of your project can be varied on three sites. The design space diagram should show nine versions in total on three different sites. In addition to the diagrammatic requirements, prepare a rendering or collage showing one version of your design. If you choose to use text-to-image AI as part of your workflow, be sure to substantially rework the outputs to make the drawing your own.
Your script and Rhino file are due one week in advance of the final deadline, on December 16th at 11:59pm. Submit the following two files:
1.
Your Grasshopper file in .GH format. All references to Rhino geometry must be internalized, including your initial link to the property data map.
2. Your working Rhino file.
Late marks for this component are reduced to 1% of your assignment mark per day.
Please submit the following one file to Quercus on December 23rd at 11:59pm:
3. PDF document
• On the first page, display your rendering or collage, name, and student number
• On the second and following pages, show the following:
  - Your design space drawing
  - An image of your Grasshopper script
  - A 150-250 word narrative describing your design process and conceptual approach
Note that December 23rd is the last day that term work can be submitted. We are unable to accept work submitted past this date or extend the deadline further—it’s already been extended to the last possible hour.
This assignment is worth 45% of your final grade. The mark breakdown is as follows:
10%: Organization and performance of your Grasshopper script. Does your script open correctly? Does it accurately reproduce your design on a variety of sites? How detailed are the models produced by your script?
10%: Success of design space diagram. Does your design space diagram indicate the breadth of different possible design outcomes? Are the outputs of the design space diagram sufficiently varied?
15%: Design. Does your proposal reflect a high level of creativity and design ability? Does your collage or rendering effectively communicate your design idea?
10%: Clarity and creativity. Does your written submission clearly describe your modelling process? Did you approach the assignment in a creative or conceptual way?
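The assignment itself is done in Grasshopper, but the design-space requirement (three sites, at least one varying parameter, nine versions in total) boils down to enumerating a small cross product. The sketch below is a hypothetical, language-neutral illustration of that enumeration only; the site names and the courtyard_ratio parameter are invented placeholders, not part of the brief.

```python
# Hypothetical sketch of the design-space enumeration the brief asks for:
# three sites x three values of one varying parameter = nine versions.
# The real work happens in Grasshopper; this only shows the cross-product
# logic. All names and values below are made-up placeholders.

from itertools import product

sites = ["site_A", "site_B", "site_C"]     # the three Assignment #2 sites
courtyard_ratio = [0.2, 0.4, 0.6]          # one hypothetical varying parameter

design_space = [
    {"site": s, "courtyard_ratio": r}
    for s, r in product(sites, courtyard_ratio)
]

assert len(design_space) == 9              # "nine versions in total"
for variant in design_space:
    print(variant)
```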

$25.00 View

[SOLVED] Individual Essay 2500 words Review and application Integrated STEM Education

Individual Essay (2500 words)
Write an individual essay on “Review and application - Integrated STEM Education”. The essay is expected to comprise:
a) A literature review on the theoretical considerations in promoting integrated STEM education
b) A search of the background and content of an integrated curriculum (local or overseas), and a discussion of how it is aligned with the goals and rationale for STEM education
c) An analysis and evaluation of the strengths and limitations, and the impacts (if available), of the curriculum in achieving the goals of STEM education, based on your review in (a)
d) Sharing your teaching or learning experiences about integrated STEM education
e) Presenting your own insights into STEM education from a curriculum perspective as supported by your experiences, drawing on your conclusions in (a-d), culminating in your own emergent philosophy of STEM education
Individual Essay (2500 words)
Write an individual essay on “Review and application - Integrated STEM Education”. The essay is expected to comprise:
a) A literature review on the theoretical considerations in promoting integrated STEM education 20%
b) Compare and contrast the STEM curriculum from three different places (in table form). Based on your comparison, suggest the one that is most suitable to be applied in your school/place, explaining how it aligns with the goals for STEM education 20%
c) Design of a school-based STEM curriculum 20%
• Name of the intervention (similar theme but different issues)
• Principle
• Expected learning outcomes with justifications – different dimensions of learning outcomes for integrated STEM education
• Activity plan: contents + schedules + flow of events
• Flowchart
• Assessment tools
d) Presenting your own insights into STEM education from a curriculum perspective as supported by your experiences, drawing on your conclusions in (a-d), culminating in your own emergent philosophy of STEM education
10%
e) Organization 10%
f) Presentation 10%
g) References 10%

$25.00 View

[SOLVED] Problem set 3 soc-ga 2332 intro to stats

Recall that in our first lecture on regression, we talked about the Gauss-Markov assumptions. If all these assumptions are met, the OLS estimator is the Best Linear Unbiased Estimator (BLUE). In a simple bivariate case, if the “true” data-generating process is Y = β0 + β1X + ε, the Gauss-Markov assumptions can be stated as the following:
(a) Linearity: A linear relationship between X and Y holds in the sample.
(b) Exogeneity of predictors: The conditional mean of the error term, given the predictor, is zero (x = [x1, x2, …, xn]ᵀ is the value vector of X): E[εi | x] = 0, for all i = 1, 2, …, n.
(c) No perfect collinearity: Explanatory variables cannot be perfectly correlated.
(d) Homoskedasticity:
• No heteroskedasticity: The conditional variance of the error term, given the predictor, is constant: Var[εi | x] = σ², for all i = 1, 2, …, n.
• No autocorrelation: Conditional on the predictor, the error terms are uncorrelated across the observations: Cov[εi, εj | x] = 0, for i ≠ j.
1. [15pts] For each of the assumptions, discuss what will go wrong when the assumption is violated. Be brief in your answers. Note: In addition to class materials, you can learn more about these assumptions in the Wikipedia article on the Gauss–Markov theorem, particularly the “Gauss–Markov theorem as stated in econometrics” section. (You can skip all the mathematical proofs and remarks.)
[Your Answer Here]
2. [5pts] Let β0 = −0.25, β1 = 1.2, X ∼ Γ(5, 4), and ε ∼ Normal(0, 1). Here, Γ(α, ψ) denotes the Gamma distribution with shape parameter α and rate parameter ψ. (You can search how to use R to simulate from this distribution.) Simulate a dataset of size n = 3,000 from this process in which all of the assumptions you’ve discussed above hold. Estimate an OLS model and plot regression diagnostics of this model.
# Your code here
Bonus Question [10pts]: From assumptions (a), (b), and (d), choose one assumption and simulate a dataset that violates that assumption (all other assumptions should be satisfied).
Create a plot which illustrates how the violation of the assumption affects the regression results. This can be a scatterplot with both the “true” and “false” OLS lines, a sampling distribution of the OLS estimator (comparing your estimated model results with actual simulations), or anything that shows how the violation leads us to false decisions if we assume the assumption is true. (The point is to demonstrate a contrast between the “true” and the “false”, not just diagnostics of the “false”.) When simulating data, you don’t have to use the parameters set in the previous problem. Hint: You can search how to use + stat_function() to plot a nonlinear line when plotting with ggplot(), or search how to use the base R functions such as plot() and curve().
# Your code here
A study on COVID-19 constructed a “COVID risk factor” score based on the COVID infection rate of a given area (defined by zip code). A researcher wants to estimate the effect of having a vaccination center in the area on that area’s COVID risk factor score. She compiled a dataset that contains each area’s COVID risk factor score and whether the area has a vaccination center. She then estimated the effect of having a vaccination center using the “naive estimator” we discussed in class.
You noted that the quality of information residents have about COVID and the vaccine can be a confounding variable that affects both the area’s infection rate and whether there is a vaccination center in the area.
Assume that you are able to estimate the relationships this “informedness” confounder (info) and the original “vaccination center” predictor (vaccine) have with the COVID risk factor score (covid_risk), which can be simulated using the following code (n is sample size):
set.seed(1234) # set the same seed to ensure identical results
e = rnorm(n, 0, 0.5)
covid_risk = rescale(0 - 7*vaccine - 2*info + e, to = c(0, 100))
1.
[5pts] Import the data covid.csv and, following the counterfactual framework, construct a counterfactual “risk factor” variable in the dataframe.
# Your code here
2. [10pts] Fill out the table below (round to 1 decimal place):
Group                    | Y^T                  | Y^C
Treatment Group (D = 1)  | E[Y^T | D = 1] = ?   | E[Y^C | D = 1] = ?
Control Group (D = 0)    | E[Y^T | D = 0] = ?   | E[Y^C | D = 0] = ?
3. [15pts] Estimate the following: (a) the naive estimator of the ATE; (b) the treatment effect on the treated; (c) the treatment effect on the control; (d) the selection bias.
4. [15pts] Write a non-technical, short summary reporting your results in response to the above-mentioned researcher who used the naive estimation. Imagine that you are explaining this to an audience who may not be familiar with the specific terminologies of the counterfactual framework (such as ATE or treatment effect on the treated), but is interested in your substantive findings.
[Your Answer Here]
admin.csv contains a dataset of graduate school admission results with the following variables:
Variable Name | Variable Detail
admit         | Admission dummy (admitted is 1)
gre           | GRE score
gpa           | GPA
rank          | Institution tier (Tier 1 to 4)
1. [10pts] Import admin.csv to your R environment. Estimate (a) a linear probability model and (b) a logistic regression model to predict the probability of being admitted based on the applicant’s GRE, GPA, and institution tier. Display the two modeling results in a table.
# Your code here
2. [10pts] In one or two paragraphs, summarize your modeling result for each model.
[Your Answer Here]
3. [15pts] Plot the predicted probability of admission based on one’s GPA percentile and institution rank (holding GRE at the mean) for the logistic regression model. For the purpose of this exercise, please set the value of gpa to range from 1 to 4.
Make sure to add an appropriate title and labels to your figure.
# Your code here
Part 4 (Not Graded): Final Replication Project
At this point, you should have completed most of the data cleaning and started replicating the descriptive tables and figure. You can submit an additional PDF file if you have made progress in replicating Table A1a, Table A1b, and Figure 1.
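The problem set is to be answered in R, but the confounding structure behind Part 2 can be illustrated in a few lines. The sketch below is a hypothetical Python analogue of the prompt's R snippet: informedness (info) raises the chance of a vaccination center and independently lowers the risk score, so the naive estimator overstates the magnitude of the vaccine effect. The coefficients echo the shape of the given R code, but the data are invented, not covid.csv.

```python
# Hedged Python sketch (the assignment itself uses R) of how a confounder
# biases the naive estimator. All numbers here are invented; only the
# structure (info affects both vaccine and covid_risk) mirrors the prompt.

import random

random.seed(1234)
n = 10_000

info = [random.random() for _ in range(n)]   # "informedness" confounder in [0, 1)
# Better-informed areas are more likely to host a vaccination center.
vaccine = [1 if random.random() < 0.2 + 0.6 * x else 0 for x in info]
# Risk score: vaccine lowers it by 7, info lowers it by 2, plus noise
# (echoing the shape of the R snippet, without the rescaling step).
covid_risk = [0 - 7 * v - 2 * x + random.gauss(0, 0.5)
              for v, x in zip(vaccine, info)]

treated = [y for y, v in zip(covid_risk, vaccine) if v == 1]
control = [y for y, v in zip(covid_risk, vaccine) if v == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)

print(f"naive estimator ~ {naive:.2f} (true treatment effect is -7)")
# The gap between the naive estimate and -7 is the selection bias created
# by conditioning vaccine placement on informedness.
```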

$25.00 View