School of Computing and Information Systems
The University of Melbourne
COMP90049, Introduction to Machine Learning, Semester 2 2024

Assignment 2: Predicting Supreme Court Rulings

Released: Friday, September 6th 2024
Due: Stage I: Friday, October 4th 5pm; Stage II: Wednesday, October 9th 5pm
Marks: The Project will be marked out of 30, and will contribute 30% of your total mark.

1 Overview
In this assignment, you will work with a dataset containing legal cases which were heard in the Supreme Court of the United States. If someone disagrees with the ruling of a lower court, they can appeal it by petitioning to the Supreme Court. In the Supreme Court, the case will be heard and discussed by six to nine judges who can either reverse the previous decision (successful appeal), or affirm (keep) the previous decision (unsuccessful appeal).
The assignment is divided into two stages: In Stage 1, you will develop a set of models to address key research questions and summarise your findings in a research paper-style report. You will also participate in a Kaggle in-class competition. In Stage 2, you will review two anonymous submissions from your peers, giving you the opportunity to reflect on different approaches and provide feedback. Throughout the project, you are expected to demonstrate your understanding of machine learning techniques and clearly communicate your knowledge in a report. You are expected to read this specification in full before commencing the project.
1.1 Data and features
You will develop and critically analyse models for predicting the Supreme Court decision (reverse or affirm) based on the texts of the oral discussions, as well as metadata about the case and the involved judges. That is, given a case, your model(s) will predict a binary label. You will be provided with a data set of over 5,000 past Supreme Court rulings and their decisions. Each case is represented by different classes of features (Table 1).
Features x1–x13 contain information that is available before any decision is made, i.e., that could be used in a realistic usage situation of the resulting model. When developing your classifiers initially, you should only use (a subset of) these features. (Section 1.2, RQ1)
Features x14–x15 provide information that becomes available only after or at decision time. Using this information would be considered cheating. You may use these features to analyse the performance of your classifiers, or biases in the data set. (Section 1.2, RQ2)

Features x16–x17 provide personal information about the judges deciding the case. While this information is publicly available, it should not influence the decision of the court (or improve your classifier). You may – but do not have to – use these features to explore your classifier performance in depth. (Section 1.2, RQ3)
1.2 Description of tasks
Your overall task is to predict the Supreme Court decision (reverse or affirm) based on the texts of the oral discussions, as well as some metadata about the case. You are strongly encouraged to make use of machine learning software and/or existing libraries in your attempts at this project (such as sklearn or scipy). You are expected to address the following research questions (RQ):
RQ1 Compare at least ONE baseline (majority, random) and TWO machine learning models in terms of their performance, using (any subset of) features x1–x13. In addition to reporting and comparing the results, you are expected to perform and document steps that ensure the quality of your experiments, such as analysing the data distribution, performing a hyper-parameter search if applicable, and examining whether model training leads to overfitting. You are also required to explain the strengths and weaknesses, advantages and disadvantages of the different approaches you tried.
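A minimal sketch of an RQ1-style comparison with scikit-learn is shown below; the random arrays stand in for real feature matrices, and the two model choices (logistic regression, k-NN) are illustrative rather than prescribed.

```python
# Sketch: majority baseline vs. two classifiers on a held-out dev set.
# Synthetic random data stands in for the real Supreme Court features.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 10)), rng.integers(0, 2, size=400)
X_dev, y_dev = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)

models = {
    "majority baseline": DummyClassifier(strategy="most_frequent"),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)                      # fit on the training split
    scores[name] = accuracy_score(y_dev, model.predict(X_dev))  # score on dev
    print(f"{name}: dev accuracy = {scores[name]:.3f}")
```

On real data you would replace the random arrays with your chosen feature subset, and add a hyper-parameter search (e.g. over k) using only the training and development sets.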
RQ2 Explore whether features x14 and/or x15 affect the performance of the models. For example, you can explore if using features that reflect the difficulty of the case (length of its hearing, vote unanimity etc) can improve the prediction.
RQ3 You must address one additional research question yourself, choosing appropriate models, data splits and evaluation methods. We provide two possible RQs for your inspiration, but you are free to choose a different one.
RQ3a: How does Issue Area impact the Supreme Court case rulings? Most cases in the given data set are assigned to one out of 14 issue areas (Feature x8), with a few cases having an UNKNOWN area. Explore the extent to which your model learns features that generalize across issue areas. You will want to compare models that share features across areas against models that do not. You may want to experiment with predicting missing values for cases with an UNKNOWN issue area, and assess the impact on performance.
RQ3b: Exploring Bias in Supreme Court Predictions Personal attributes of Supreme Court judges should not be predictive of the final decision. Explore how features x16–x17 impact your model performance. For example, you can explore if the number of voting judges, or the political orientation of the judges, impacts the decision. Or, explore whether courts under different ‘chief justices’ (x16) exhibited different ruling patterns.
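Both RQ3 variants boil down to breaking model performance down by some grouping attribute (issue area, chief justice, etc.). A sketch of such a per-group accuracy breakdown, with toy records standing in for real gold/predicted labels:

```python
# Sketch: accuracy broken down by group (here, issue area), for RQ3-style
# analysis. The records are toy stand-ins, not the real dataset.
from collections import defaultdict

records = [  # (issue_area, gold_label, predicted_label)
    ("Criminal Procedure", 1, 1), ("Criminal Procedure", 0, 1),
    ("Civil Rights", 1, 1), ("Civil Rights", 0, 0),
    ("UNKNOWN", 1, 0),
]
hits, totals = defaultdict(int), defaultdict(int)
for area, gold, pred in records:
    totals[area] += 1
    hits[area] += int(gold == pred)

per_area = {area: hits[area] / totals[area] for area in totals}
for area, acc in sorted(per_area.items()):
    print(f"{area}: accuracy = {acc:.2f}")
```

The same pattern applies to grouping by chief justice (x16) or judge attributes (x17) for RQ3b.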
The goal of this assignment is to critically assess the effectiveness of various Machine Learning algorithms on the problem of determining the Supreme Court decision, and to express the knowledge that you have gained in a technical report. The technical side of this project will involve applying appropriate machine learning algorithms to the data to solve the task. There will be a Kaggle in-class competition where you can compare the performance of your algorithms against your classmates. Note that we expect it to be difficult to achieve substantial performance improvements on this task; among the standard models we tried the best one achieved a gain of only 0.03 in accuracy against the majority baseline. Thus, the goal of the project is to thoughtfully compare and explain models and features rather than achieve major performance gains.

ID   Name                    Description
x1   'title'                 the name of the case
x2   'petitioner'            the party who appealed the case decision to the Supreme Court
x3   'respondent'            the respondent to that case
x4   'petitioner state'      the state where the petitioner is located (not all cases have this value)
x5   'respondent state'      the state where the respondent is located (not all cases have this value)
x6   'petitioner category'   the category to which the petitioner belongs (state, business, organization, ...)
x7   'respondent category'   the category to which the respondent belongs (state, business, organization, ...)
x8   'issue area'            encodes the main area of the law applicable to the case, such as Criminal Procedure, Civil Rights, Privacy, etc.
x9   'year'                  the year when the case was filed
x10  'argument date'         the day, month, and year that the case was orally argued before the Court
x11  'court hearing length'  length of the Court discussion regarding the case (in minutes)
x12  'utterances number'     number of utterances, or turns (i.e. when speakers switch during the conversation), in the Supreme Court discussion
x13  'court hearing'         the text of the Court's discussion, with utterances separated by five vertical bars (|||||)
x14  'decision date'         the day, month, and year of the decision
x15  'majority ratio'        the ratio of judges who voted with the majority vs judges who voted with the minority. Please note that this ratio is about the vote distribution (how unanimous the voting was), and does not reflect the winning side, i.e. the majority of judges can vote to reverse the decision or to keep it.
x16  'chief justice'         the chief judge of the court when the current case was decided. One of [Burger, Rehnquist, Roberts, Warren]
x17  'justices'              list of the judges who took part in the voting. Each judge is represented as a dictionary with the fields 'name' (the judge's ID), 'born year' (the year of their birth), 'gender' (their gender: male, female), and 'political direction' (the political direction that best describes the judge's voting behavior: Liberal or Conservative).
Y    'successful appeal'     0 or 1; shows whether the original decision was reversed (1, meaning the appeal to the Supreme Court was successful) or affirmed (0, meaning the appeal was unsuccessful and the previous ruling was kept). This is the label we are trying to predict.

Table 1: Input features (x1–x13), information about the decision (x14–x15), sensitive features (x16–x17), and label (Y) of the Supreme Court ruling dataset.
The focus of the project will be the report, formatted as a short research paper. In the report, you will demonstrate the knowledge that you have gained, in a manner that is accessible to a reasonably informed reader.

2 Deliverables
Stage I: Model development and testing and report writing (due October 4th):
1. One or more programs, written in Python, including all the code necessary to reproduce the results in your report (including model implementation, label prediction, and evaluation). You should also include a README file that briefly details your implementation. Submitted through the LMS.
2. An anonymous written report, of 2000 words (±10%) excluding reference list. Your name and student ID should not appear anywhere in the report, including the metadata (filename, etc.). Submitted through the LMS/Turnitin. You must upload the report as a separate PDF file. Do NOT upload it as part of a compressed archive (zip, tar, . . . ) file or in a different format.
3. Predictions for the test set of court rulings submitted to the Kaggle1 in-class competition described in Sec 7.
Stage II: Peer reviews (due October 9th):
1. Reviews of two reports written by your classmates, of 150-300 words each. Submitted through LMS.
3 Data Sets
You will be provided with:

• A training set of 4,612 Supreme Court rulings, with features x1–x17 (Table 1) and labelled with the court decision (reversed or affirmed);
• A development (validation) set of 577 labeled court rulings, with features x1–x17 (Table 1) and labels, which you may use for model selection and tuning;
• A test set of 577 court rulings, with features x1–x17 (Table 1) but with no target labels. This data set will be used for final evaluation in the Kaggle in-class competition.

3.1 Conversation Embeddings
To aid in your initial experiments, we have provided two different representations of the full court conversations (feature x13). You may use any of these representations in your experiments, and you may also engineer your own features from the raw conversations if you wish. The provided representations are:
I. Raw The raw court discussion in plain text is provided in the field court_hearing in the raw data *.jsonl files. You may use this field to engineer your own representation of the discussion, for example, using TF-IDF vectorisation over the whole text or some particular segments of it.
II. Embedding We mapped each court discussion to a 384-dimensional embedding computed with a pretrained language model, called the Sentence Transformer (Reimers and Gurevych, 2019).2 These vectors capture the “meaning” of each court discussion so that similar discussions will be located closely together in the 384-dimensional space. E.g.,
1 https://www.kaggle.com/
2 https://pypi.org/project/sentence-transformers/
 
[2.05549970e-02, 8.67250003e-02, 8.83460036e-02, -1.26217505e-01, 1.31394998e-02, . . .] a 384-dimensional list of numbers
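The "closeness" of such embeddings is usually measured with cosine similarity. A minimal sketch, with random vectors standing in for real sentence embeddings:

```python
# Sketch: cosine similarity between two 384-dimensional embedding vectors.
# Random vectors stand in for real sentence embeddings here.
import numpy as np

rng = np.random.default_rng(42)
a, b = rng.normal(size=384), rng.normal(size=384)
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cos:.3f}")  # for random vectors, typically near 0
```

Two semantically similar court discussions should score noticeably higher than a random pair.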
Data format The main data files containing features x1–x17 are provided in JSON lines format (train.jsonl, dev.jsonl and test.jsonl). The labeled data sets also contain the target label (Y).
The sentence embedding representations are provided as dense NumPy matrices (files ending in *.npy).3 Line numbers for the same data set type refer to the same instance, e.g., line 5 in train.jsonl and sembed\train.npy are different representations of the same court conversation.
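Loading the two aligned representations can be sketched as follows; tiny stand-in files are created first, since the real train.jsonl and embedding matrix are not bundled here, and the label field name ("successful_appeal") is an assumption about the schema.

```python
# Sketch: reading the aligned .jsonl and .npy representations.
# Stand-in files are written first; field names are assumptions.
import json
import os
import tempfile
import numpy as np

tmp = tempfile.mkdtemp()
jsonl_path = os.path.join(tmp, "train.jsonl")
npy_path = os.path.join(tmp, "train.npy")

cases = [{"title": "Case A", "successful_appeal": 1},
         {"title": "Case B", "successful_appeal": 0}]
with open(jsonl_path, "w") as f:
    for case in cases:
        f.write(json.dumps(case) + "\n")        # one JSON object per line
np.save(npy_path, np.zeros((2, 384)))           # one 384-dim embedding per line

# Loading: line i of the .jsonl pairs with row i of the embedding matrix.
with open(jsonl_path) as f:
    records = [json.loads(line) for line in f]
embeddings = np.load(npy_path)
print(records[0]["title"], embeddings.shape)
```

The key invariant is that `len(records) == embeddings.shape[0]`, so labels from the JSON lines can be zipped directly with the embedding rows.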
4 Project Stage I
This is the main part of the project, where you are expected to address research questions as explained below, and summarise your findings in a research paper-style report.
4.1 Research Questions
You should address three research questions in your project, as described in Section 1.2. RQ1 and RQ2 must be addressed. For RQ3, you may either choose one of the questions we proposed for inspiration, or propose your own. Your report should clearly state which RQ3 you are addressing. Addressing more than one RQ3 does not lead to higher marks. We are more interested in your critical analysis of methods and results than the coverage of more content or materials. However, for RQ1 you should minimally implement and analyse in your report one baseline, and at least two different machine learning models. N.B. We are more interested in your critical analysis of methods and results than the raw performance of your models. You may not be able to arrive at a definitive answer to your research questions, which is perfectly fine. However, you should analyse and discuss your (possibly negative) results in depth.
4.2 Feature Engineering (optional)
We have discussed three types of attributes in this subject: categorical, ordinal, and numerical. All three types can be constructed for the given data. Some machine learning architectures prefer numerical attributes (e.g. k-NN); some work better with categorical attributes (e.g. multivariate Naive Bayes) – you will probably observe this through your experiments.
It is optional for you to engineer some attributes based on the raw court discussions (and possibly use them instead of – or along with – the feature representation provided by us). Or, you may simply use the text features (sentence embeddings) that we generated for you. In addition to (or instead of) text features, you may use any combination of categorical, ordinal, and numerical features from attributes x1–x12.
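One common way to combine raw text, categorical, and numerical attributes in a single feature matrix is scikit-learn's ColumnTransformer. This is a sketch under assumed column names ("hearing", "issue_area", "year"), not the dataset's actual schema:

```python
# Sketch: mixing TF-IDF text features, one-hot categorical features, and
# scaled numerical features. Column names and values are illustrative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "hearing": ["counsel argues the statute is unconstitutional",
                "petitioner claims the lower court erred",
                "respondent urges the court to affirm"],
    "issue_area": ["Criminal Procedure", "Civil Rights", "Privacy"],
    "year": [1990, 2001, 2015],
})
pre = ColumnTransformer([
    ("text", TfidfVectorizer(), "hearing"),                    # raw text column
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["issue_area"]),
    ("num", StandardScaler(), ["year"]),
])
X = pre.fit_transform(df)
print(X.shape)  # (3 rows, TF-IDF vocabulary + one-hot + numeric columns)
```

The fitted transformer can then be placed in a Pipeline in front of any classifier, so the same preprocessing is applied consistently to train, dev, and test splits.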
3 Learn here how to read and process these files: https://numpy.org/doc/stable/reference/generated/numpy.load.html
 
4.3 Evaluation
The objective of your learners will be to predict the labels of unseen data. We will use a holdout strategy. The data collection has been split into three parts: a training set, a development (validation) set, and a test set. This data is available on the LMS.
To give you the possibility of evaluating your models on the test set, we will be setting up a Kaggle in-class competition. You can submit results on the test set there, and get immediate feedback on your system’s performance. There is a leaderboard that will allow you to see how well you are doing compared to other classmates participating online.
4.4 Report
You will submit an anonymised report of 2000 words in length (±10%), excluding reference list. The report should follow the structure of a short research paper, as discussed in the guest lecture on Academic Writing. It should describe your approach and observations in the context of your chosen research question, both in engineering (optional) features, and the machine learning algorithms you tried. Its main aim is to provide the reader with knowledge about the problem, in particular, critical analysis of your results and discoveries. The internal structure of well-known machine learning models should only be discussed if it is important for connecting the theory to your practical observations.
• Introduction: a short description of the problem and data set, and the research question addressed
• Literature review: a short summary of some related literature, including the data set reference and at least two additional relevant research papers of your choice. You might find inspiration in the Reference list of this document. You are encouraged to search for other references, for example among the articles cited within the papers referenced in this document.
• Method: Identify the newly engineered feature(s), and the rationale behind including them (optional). Explain the ML models and evaluation metric(s) you have used (and why you have used them).
• Results: Present the results, in terms of evaluation metric(s) and, ideally, illustrative examples. Use of tables and diagrams is highly recommended.
• Discussion / Critical Analysis: Contextualise∗∗ the system’s behavior, based on the understanding from the subject materials as well as in the context of the research question.
• Conclusion: Clearly demonstrate your identified knowledge about the problem.
• A bibliography, which includes Fang et al. (2023b), as well as references to any other related work you used in your project. You are encouraged to use the APA 7 citation style, but may use different styles as long as you are consistent throughout your report.
∗∗ Contextualise implies that we are more interested in seeing evidence of you having thought about the task, and determined reasons for the relative performance of different methods, rather than the raw scores of the different methods you select. This is not to say that you should ignore the relative performance of different runs over the data, but rather that you should think beyond simple numbers to the reasons that underlie them.
We will provide LaTeX and RTF style files that we would prefer that you use in writing the report. Reports are to be submitted in the form of a single PDF file. If a report is submitted in any format other than PDF, we reserve the right to return the report with a mark of 0.
Your name and student ID should not appear anywhere in the report, including any metadata (filename, etc.). If we find any such information, we reserve the right to return the report with a mark of 0.
 
5 Project Stage II
During the reviewing process, you will read two anonymous submissions by your classmates. This is to help you contemplate some other ways of approaching the Project, and to ensure that every student receives some extra feedback. You should aim to write 150-300 words total per review, responding to three ’questions’:
• Briefly summarise what the author has done in one paragraph (50-100 words)
• Indicate what you think the author has done well, and why, in one paragraph (50-100 words)
• Indicate what you think could have been improved, and why, in one paragraph (50-100 words)
6 Assessment Criteria
The Project will be marked out of 30, and is worth 30% of your overall mark for the subject. The mark breakdown will be:
Report Quality: (26/30 marks)
You can consult the marking rubric on the LMS/Assignment 2 page which indicates in detailed categories what we will be looking for in the report.
Kaggle: (2/30 marks)
For submitting (at least) one set of model predictions to the Kaggle competition. Your marks will not depend on your results in this competition.
Reviews: (2/30 marks)
You will write a review for each of the two reports written by other students; you will follow the guidelines stated above.
7 Using Kaggle
Task The Kaggle competition will be on predicting the results of the Supreme Court hearing: reversed or affirmed.
Instructions The Kaggle in-class competition URL will be announced on the LMS shortly. To participate, do the following:
• Each student should create a Kaggle account (unless they have one already) using your Student-ID.
• You may make up to 8 submissions per day. An example submission file can be found on the Kaggle site.
• Submissions will be evaluated by Kaggle for accuracy, against just 30% of the test data, forming the public leaderboard.
• Prior to the closing of the competition, you may select a final submission out of the ones submitted previously; by default the submission with the highest public leaderboard score is selected by Kaggle.
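Writing predictions to a CSV for upload can be sketched as follows. The header names "id" and "label" are guesses for illustration; use the exact column names from the example submission file on the Kaggle site.

```python
# Sketch: writing a Kaggle submission CSV. The case IDs, labels, and the
# header ("id", "label") are hypothetical; check the Kaggle example file.
import csv
import os
import tempfile

predictions = {101: 1, 102: 0, 103: 1}  # hypothetical case ID -> predicted label
path = os.path.join(tempfile.mkdtemp(), "submission.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])            # assumed header row
    for case_id, label in sorted(predictions.items()):
        writer.writerow([case_id, label])

# Read the file back to verify its contents.
with open(path) as f:
    rows = list(csv.reader(f))
print(rows[:2])
```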

• After the competition closes, public 30% test scores will be replaced with the private leaderboard 100% test scores.
8 Assignment Policies
8.1 Terms of Data Use
The data set is derived from the resource published in Fang et al. (2023b):
Biaoyan Fang, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2023. Super-SCOTUS: A multi-sourced dataset for the Supreme Court of the US. In Proceedings of the Natural Legal Language Processing Workshop 2023, pages 202–214, Singapore. Association for Computational Linguistics.
This reference must be cited in the bibliography. We reserve the right to mark any submission lacking this reference with a 0, due to violation of the Terms of Use. We include other related references in the References section at the end of this document.
Changes/Updates to the Project Specifications
We will use LMS announcements for any large-scale changes (hopefully none!) and Ed for small clarifications. Any addendums made to the Project specifications via LMS will supersede information contained in this version of the specifications.
Late Submission Policy
We allow no extensions or late submissions to ensure a smooth peer review process. Submission will close at 5pm on October 4th. Students who are eligible for special consideration (e.g., with an APP) should email Lea Frermann (lea.frermann@unimelb.edu.au) and a solution will be accommodated.
Academic Misconduct
For most students, discussing ideas with peers will form a natural part of the undertaking of this project. However, it is still an individual task, and so reuse of ideas or excessive influence in algorithm choice and development will be considered cheating. We highly recommend (re)taking the academic honesty training module in this subject’s LMS. We will be checking submissions for originality and will invoke the University’s Academic Misconduct policy4 where inappropriate levels of collusion or plagiarism are deemed to have taken place. Content produced by generative AI (including, but not limited to, ChatGPT) is not your own work, and submitting such content will be treated as a case of academic misconduct, in line with the University’s policy.
4 http://academichonesty.unimelb.edu.au/policy.html
 
References
Fang, B., Cohn, T., Baldwin, T., and Frermann, L. (2023a). More than votes? Voting and language based partisanship in the US Supreme Court. In Bouamor, H., Pino, J., and Bali, K., editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4604–4614, Singapore. Association for Computational Linguistics.
Fang, B., Cohn, T., Baldwin, T., and Frermann, L. (2023b). Super-SCOTUS: A multi-sourced dataset for the Supreme Court of the US. In Preoțiuc-Pietro, D., Goanta, C., Chalkidis, I., Barrett, L., Spanakis, G., and Aletras, N., editors, Proceedings of the Natural Legal Language Processing Workshop 2023, pages 202–214, Singapore. Association for Computational Linguistics.
Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Schütze, H., Manning, C. D., and Raghavan, P. (2008). Introduction to Information Retrieval, volume 39. Cambridge University Press, Cambridge.
