Merge "Merge remote-tracking branch 'origin/tmp-84df952' into msm-kona"
commit 687a4f36b4
143 changed files with 993 additions and 972 deletions
@@ -1,4 +1,4 @@
-.. SPDX-License-Identifier: CC-BY-SA-4.0
+.. SPDX-License-Identifier: GPL-2.0+

 =============
 ID Allocation
156	Documentation/process/code-of-conduct-interpretation.rst	Normal file

@@ -0,0 +1,156 @@
.. _code_of_conduct_interpretation:

Linux Kernel Contributor Covenant Code of Conduct Interpretation
================================================================

The :ref:`code_of_conduct` is a general document meant to provide a set
of rules for almost any open source community.  Every open-source
community is unique and the Linux kernel is no exception.  Because of
this, this document describes how we in the Linux kernel community will
interpret it.  We also do not expect this interpretation to be static
over time, and will adjust it as needed.

The Linux kernel development effort is a very personal process compared
to "traditional" ways of developing software.  Your contributions and
the ideas behind them will be carefully reviewed, often resulting in
critique and criticism.  The review will almost always require
improvements before the material can be included in the kernel.  Know
that this happens because everyone involved wants to see the best
possible solution for the overall success of Linux.  This development
process has been proven to create the most robust operating system
kernel ever, and we do not want to do anything to cause the quality of
submission and eventual result to ever decrease.

Maintainers
-----------

The Code of Conduct uses the term "maintainers" numerous times.  In the
kernel community, a "maintainer" is anyone who is responsible for a
subsystem, driver, or file, and is listed in the MAINTAINERS file in the
kernel source tree.

Responsibilities
----------------

The Code of Conduct mentions rights and responsibilities for
maintainers, and this needs some further clarification.

First and foremost, it is a reasonable expectation to have maintainers
lead by example.

That being said, our community is vast and broad, and there is no new
requirement for maintainers to unilaterally handle how other people
behave in the parts of the community where they are active.  That
responsibility is upon all of us, and ultimately the Code of Conduct
documents final escalation paths in case of unresolved concerns
regarding conduct issues.

Maintainers should be willing to help when problems occur, and work with
others in the community when needed.  Do not be afraid to reach out to
the Technical Advisory Board (TAB) or other maintainers if you're
uncertain how to handle situations that come up.  It will not be
considered a violation report unless you want it to be.  If you are
uncertain about approaching the TAB or any other maintainers, please
reach out to our conflict mediator, Mishi Choudhary <mishi@linux.com>.

In the end, "be kind to each other" is really what the end goal is for
everybody.  We know everyone is human and we all fail at times, but the
primary goal for all of us should be to work toward amicable resolutions
of problems.  Enforcement of the code of conduct will only be a last
resort option.

Our goal of creating a robust and technically advanced operating system
and the technical complexity involved naturally require expertise and
decision-making.

The required expertise varies depending on the area of contribution.  It
is determined mainly by context and technical complexity and only
secondarily by the expectations of contributors and maintainers.

Both the expertise expectations and decision-making are subject to
discussion, but at the very end there is a basic necessity to be able to
make decisions in order to make progress.  This prerogative is in the
hands of maintainers and the project's leadership and is expected to be
used in good faith.

As a consequence, setting expertise expectations, making decisions and
rejecting unsuitable contributions are not viewed as a violation of the
Code of Conduct.

While maintainers are in general welcoming to newcomers, their capacity
for helping contributors overcome the entry hurdles is limited, so they
have to set priorities.  This, also, is not to be seen as a violation of
the Code of Conduct.  The kernel community is aware of that and provides
entry-level programs in various forms like kernelnewbies.org.

Scope
-----

The Linux kernel community primarily interacts on a set of public email
lists distributed around a number of different servers controlled by a
number of different companies or individuals.  All of these lists are
defined in the MAINTAINERS file in the kernel source tree.  Any emails
sent to those mailing lists are considered covered by the Code of
Conduct.

Developers who use the kernel.org bugzilla, and other subsystem bugzilla
or bug tracking tools should follow the guidelines of the Code of
Conduct.  The Linux kernel community does not have an "official" project
email address, or "official" social media address.  Any activity
performed using a kernel.org email account must follow the Code of
Conduct as published for kernel.org, just as any individual using a
corporate email account must follow the specific rules of that
corporation.

The Code of Conduct does not prohibit continuing to include names, email
addresses, and associated comments in mailing list messages, kernel
change log messages, or code comments.

Interaction in other forums is covered by whatever rules apply to said
forums and is in general not covered by the Code of Conduct.  Exceptions
may be considered for extreme circumstances.

Contributions submitted for the kernel should use appropriate language.
Content that already exists predating the Code of Conduct will not be
addressed now as a violation.  Inappropriate language can be seen as a
bug, though; such bugs will be fixed more quickly if any interested
parties submit patches to that effect.  Expressions that are currently
part of the user/kernel API, or reflect terminology used in published
standards or specifications, are not considered bugs.

Enforcement
-----------

The address listed in the Code of Conduct goes to the Code of Conduct
Committee.  The exact members receiving these emails at any given time
are listed at https://kernel.org/code-of-conduct.html.  Members cannot
access reports made before they joined or after they have left the
committee.

The initial Code of Conduct Committee consists of volunteer members of
the TAB, as well as a professional mediator acting as a neutral third
party.  The first task of the committee is to establish documented
processes, which will be made public.

Any member of the committee, including the mediator, can be contacted
directly if a reporter does not wish to include the full committee in a
complaint or concern.

The Code of Conduct Committee reviews the cases according to the
processes (see above) and consults with the TAB as needed and
appropriate, for instance to request and receive information about the
kernel community.

Any decisions by the committee will be brought to the TAB, for
implementation of enforcement with the relevant maintainers if needed.
A decision by the Code of Conduct Committee can be overturned by the TAB
by a two-thirds vote.

At quarterly intervals, the Code of Conduct Committee and TAB will
provide a report summarizing the anonymised reports that the Code of
Conduct Committee has received and their status, as well as details of
any overridden decisions, including complete and identifiable voting
details.

We expect to establish a different process for Code of Conduct Committee
staffing beyond the bootstrap period.  This document will be updated
with that information when this occurs.
@@ -1,3 +1,5 @@
+.. _code_of_conduct:
+
 Contributor Covenant Code of Conduct
 ++++++++++++++++++++++++++++++++++++

@@ -63,19 +65,22 @@ Enforcement
 ===========

 Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the Technical Advisory Board (TAB) at
-<tab@lists.linux-foundation.org>. All complaints will be reviewed and
-investigated and will result in a response that is deemed necessary and
-appropriate to the circumstances. The TAB is obligated to maintain
-confidentiality with regard to the reporter of an incident. Further details of
-specific enforcement policies may be posted separately.
-
-Maintainers who do not follow or enforce the Code of Conduct in good faith may
-face temporary or permanent repercussions as determined by other members of the
-project’s leadership.
+reported by contacting the Code of Conduct Committee at
+<conduct@kernel.org>. All complaints will be reviewed and investigated
+and will result in a response that is deemed necessary and appropriate
+to the circumstances. The Code of Conduct Committee is obligated to
+maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted
+separately.

 Attribution
 ===========

 This Code of Conduct is adapted from the Contributor Covenant, version 1.4,
 available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+Interpretation
+==============
+
+See the :ref:`code_of_conduct_interpretation` document for how the Linux
+kernel community will be interpreting this document.
@@ -21,6 +21,7 @@ Below are the essential guides that every developer should read.

    howto
    code-of-conduct
+   code-of-conduct-interpretation
    development-process
    submitting-patches
    coding-style
@@ -1,397 +0,0 @@
Valid-License-Identifier: CC-BY-SA-4.0
SPDX-URL: https://spdx.org/licenses/CC-BY-SA-4.0
Usage-Guide:
  To use the Creative Commons Attribution Share Alike 4.0 International
  license put the following SPDX tag/value pair into a comment according to
  the placement guidelines in the licensing rules documentation:
    SPDX-License-Identifier: CC-BY-SA-4.0
License-Text:

Creative Commons Attribution-ShareAlike 4.0 International

Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.

Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors

Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason - for example, because of any applicable exception or limitation to copyright - then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees

Creative Commons Attribution-ShareAlike 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.

Section 1 - Definitions.

   a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.

   b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.

   c. BY-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.

   d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

   e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.

   f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.

   g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike.

   h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.

   i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.

   j. Licensor means the individual(s) or entity(ies) granting rights under this Public License.

   k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.

   l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.

   m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.

Section 2 - Scope.

   a. License grant.

      1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

         A. reproduce and Share the Licensed Material, in whole or in part; and

         B. produce, reproduce, and Share Adapted Material.

      2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

      3. Term. The term of this Public License is specified in Section 6(a).

      4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.

      5. Downstream recipients.

         A. Offer from the Licensor - Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.

         B. Additional offer from the Licensor - Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply.

         C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.

      6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).

   b. Other rights.

      1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

      2. Patent and trademark rights are not licensed under this Public License.

      3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.

Section 3 - License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

   a. Attribution.

      1. If You Share the Licensed Material (including in modified form), You must:

         A. retain the following if it is supplied by the Licensor with the Licensed Material:

            i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);

            ii. a copyright notice;

            iii. a notice that refers to this Public License;

            iv. a notice that refers to the disclaimer of warranties;

            v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;

         B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and

         C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.

      2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

      3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.

   b. ShareAlike. In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.

      1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.

      2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.

      3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.

Section 4 - Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:

   a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;

   b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and

   c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.

Section 5 - Disclaimer of Warranties and Limitation of Liability.

   a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.

   b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.

   c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

Section 6 - Term and Termination.

   a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.

   b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:

      1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or

      2. upon express reinstatement by the Licensor.

   c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.

   d. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.

   e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.

Section 7 - Other Terms and Conditions.

   a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.

   b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.

Section 8 - Interpretation.

   a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.

   b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.

   c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.

   d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org.
17	MAINTAINERS

@@ -3006,6 +3006,14 @@ S:	Supported
 F:	drivers/gpio/gpio-brcmstb.c
 F:	Documentation/devicetree/bindings/gpio/brcm,brcmstb-gpio.txt
 
+BROADCOM BRCMSTB I2C DRIVER
+M:	Kamal Dasu <kdasu.kdev@gmail.com>
+L:	linux-i2c@vger.kernel.org
+L:	bcm-kernel-feedback-list@broadcom.com
+S:	Supported
+F:	drivers/i2c/busses/i2c-brcmstb.c
+F:	Documentation/devicetree/bindings/i2c/i2c-brcmstb.txt
+
 BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
 M:	Al Cooper <alcooperx@gmail.com>
 L:	linux-kernel@vger.kernel.org
@@ -3673,6 +3681,12 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/media/coda.txt
 F:	drivers/media/platform/coda/
 
+CODE OF CONDUCT
+M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+S:	Supported
+F:	Documentation/process/code-of-conduct.rst
+F:	Documentation/process/code-of-conduct-interpretation.rst
+
 COMMON CLK FRAMEWORK
 M:	Michael Turquette <mturquette@baylibre.com>
 M:	Stephen Boyd <sboyd@kernel.org>
@@ -10130,7 +10144,6 @@ L:	netdev@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next.git
 S:	Maintained
-F:	net/core/flow.c
 F:	net/xfrm/
 F:	net/key/
 F:	net/ipv4/xfrm*
@@ -13070,7 +13083,7 @@ SELINUX SECURITY MODULE
 M:	Paul Moore <paul@paul-moore.com>
 M:	Stephen Smalley <sds@tycho.nsa.gov>
 M:	Eric Paris <eparis@parisplace.org>
-L:	selinux@tycho.nsa.gov (moderated for non-subscribers)
+L:	selinux@vger.kernel.org
 W:	https://selinuxproject.org
 W:	https://github.com/SELinuxProject
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux.git
4	Makefile

@@ -2,8 +2,8 @@
 VERSION = 4
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc8
-NAME = Merciless Moray
+EXTRAVERSION =
+NAME = "People's Front"
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
@@ -426,7 +426,7 @@ void unwind_frame_init_task(struct unwind_frame_info *info,
 			r.gr[30] = get_parisc_stackpointer();
 			regs = &r;
 		}
-		unwind_frame_init(info, task, &r);
+		unwind_frame_init(info, task, regs);
 	} else {
 		unwind_frame_init_from_blocked_task(info, task);
 	}
@@ -28,7 +28,7 @@ typedef struct {
 	unsigned short	sock_id;	/* physical package */
 	unsigned short	core_id;
 	unsigned short	max_cache_id;	/* groupings of highest shared cache */
-	unsigned short	proc_id;	/* strand (aka HW thread) id */
+	signed short	proc_id;	/* strand (aka HW thread) id */
 } cpuinfo_sparc;
 
 DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
@@ -427,8 +427,9 @@
 #define __NR_preadv2		358
 #define __NR_pwritev2		359
 #define __NR_statx		360
+#define __NR_io_pgetevents	361
 
-#define NR_syscalls		361
+#define NR_syscalls		362
 
 /* Bitmask values returned from kern_features system call.  */
 #define KERN_FEATURE_MIXED_MODE_STACK	0x00000001
@@ -115,8 +115,8 @@ static int auxio_probe(struct platform_device *dev)
 		auxio_devtype = AUXIO_TYPE_SBUS;
 		size = 1;
 	} else {
-		printk("auxio: Unknown parent bus type [%pOFn]\n",
-		       dp->parent);
+		printk("auxio: Unknown parent bus type [%s]\n",
+		       dp->parent->name);
 		return -ENODEV;
 	}
 	auxio_register = of_ioremap(&dev->resource[0], 0, size, "auxio");
@@ -24,6 +24,7 @@
 #include <asm/cpudata.h>
 #include <linux/uaccess.h>
 #include <linux/atomic.h>
+#include <linux/sched/clock.h>
 #include <asm/nmi.h>
 #include <asm/pcr.h>
 #include <asm/cacheflush.h>
@@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
 			sparc_perf_event_update(cp, &cp->hw,
 						cpuc->current_idx[i]);
 			cpuc->current_idx[i] = PIC_NO_INDEX;
+			if (cp->hw.state & PERF_HES_STOPPED)
+				cp->hw.state |= PERF_HES_ARCH;
 		}
 	}
 }
@@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
 
 		enc = perf_event_get_enc(cpuc->events[i]);
 		cpuc->pcr[0] &= ~mask_for_index(idx);
-		if (hwc->state & PERF_HES_STOPPED)
+		if (hwc->state & PERF_HES_ARCH) {
 			cpuc->pcr[0] |= nop_for_index(idx);
-		else
+		} else {
 			cpuc->pcr[0] |= event_encoding(enc, idx);
+			hwc->state = 0;
+		}
 	}
 out:
 	cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
@@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
 
 		cpuc->current_idx[i] = idx;
 
+		if (cp->hw.state & PERF_HES_ARCH)
+			continue;
+
 		sparc_pmu_start(cp, PERF_EF_RELOAD);
 	}
 out:
@@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
 	event->hw.state = 0;
 
 	sparc_pmu_enable_event(cpuc, &event->hw, idx);
+
+	perf_event_update_userpage(event);
 }
 
 static void sparc_pmu_stop(struct perf_event *event, int flags)
@@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
 	cpuc->events[n0] = event->hw.event_base;
 	cpuc->current_idx[n0] = PIC_NO_INDEX;
 
-	event->hw.state = PERF_HES_UPTODATE;
+	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
 	if (!(ef_flags & PERF_EF_START))
-		event->hw.state |= PERF_HES_STOPPED;
+		event->hw.state |= PERF_HES_ARCH;
 
 	/*
 	 * If group events scheduling transaction was started,
@@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
 	struct perf_sample_data data;
 	struct cpu_hw_events *cpuc;
 	struct pt_regs *regs;
+	u64 finish_clock;
+	u64 start_clock;
 	int i;
 
 	if (!atomic_read(&active_events))
@@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
 		return NOTIFY_DONE;
 	}
 
+	start_clock = sched_clock();
+
 	regs = args->regs;
 
 	cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
 			sparc_pmu_stop(event, 0);
 	}
 
+	finish_clock = sched_clock();
+
+	perf_sample_event_took(finish_clock - start_clock);
+
 	return NOTIFY_STOP;
 }
@@ -41,8 +41,8 @@ static int power_probe(struct platform_device *op)
 
 	power_reg = of_ioremap(res, 0, 0x4, "power");
 
-	printk(KERN_INFO "%pOFn: Control reg at %llx\n",
-	       op->dev.of_node, res->start);
+	printk(KERN_INFO "%s: Control reg at %llx\n",
+	       op->dev.of_node->name, res->start);
 
 	if (has_button_interrupt(irq, op->dev.of_node)) {
 		if (request_irq(irq,
@@ -68,8 +68,8 @@ static void __init sparc32_path_component(struct device_node *dp, char *tmp_buf)
 		return;
 
 	regs = rprop->value;
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		regs->which_io, regs->phys_addr);
 }
 
@@ -84,8 +84,8 @@ static void __init sbus_path_component(struct device_node *dp, char *tmp_buf)
 		return;
 
 	regs = prop->value;
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		regs->which_io,
 		regs->phys_addr);
 }
@@ -104,13 +104,13 @@ static void __init pci_path_component(struct device_node *dp, char *tmp_buf)
 	regs = prop->value;
 	devfn = (regs->phys_hi >> 8) & 0xff;
 	if (devfn & 0x07) {
-		sprintf(tmp_buf, "%pOFn@%x,%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x,%x",
+			dp->name,
 			devfn >> 3,
 			devfn & 0x07);
 	} else {
-		sprintf(tmp_buf, "%pOFn@%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x",
+			dp->name,
 			devfn >> 3);
 	}
 }
@@ -127,8 +127,8 @@ static void __init ebus_path_component(struct device_node *dp, char *tmp_buf)
 
 	regs = prop->value;
 
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		regs->which_io, regs->phys_addr);
 }
 
@@ -167,8 +167,8 @@ static void __init ambapp_path_component(struct device_node *dp, char *tmp_buf)
 		return;
 	device = prop->value;
 
-	sprintf(tmp_buf, "%pOFn:%d:%d@%x,%x",
-		dp, *vendor, *device,
+	sprintf(tmp_buf, "%s:%d:%d@%x,%x",
+		dp->name, *vendor, *device,
 		*intr, reg0);
 }
 
@@ -201,7 +201,7 @@ char * __init build_path_component(struct device_node *dp)
 	tmp_buf[0] = '\0';
 	__build_path_component(dp, tmp_buf);
 	if (tmp_buf[0] == '\0')
-		snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp);
+		strcpy(tmp_buf, dp->name);
 
 	n = prom_early_alloc(strlen(tmp_buf) + 1);
 	strcpy(n, tmp_buf);
@@ -82,8 +82,8 @@ static void __init sun4v_path_component(struct device_node *dp, char *tmp_buf)
 
 	regs = rprop->value;
 	if (!of_node_is_root(dp->parent)) {
-		sprintf(tmp_buf, "%pOFn@%x,%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x,%x",
+			dp->name,
 			(unsigned int) (regs->phys_addr >> 32UL),
 			(unsigned int) (regs->phys_addr & 0xffffffffUL));
 		return;
@@ -97,17 +97,17 @@ static void __init sun4v_path_component(struct device_node *dp, char *tmp_buf)
 		const char *prefix = (type == 0) ? "m" : "i";
 
 		if (low_bits)
-			sprintf(tmp_buf, "%pOFn@%s%x,%x",
-				dp, prefix,
+			sprintf(tmp_buf, "%s@%s%x,%x",
+				dp->name, prefix,
 				high_bits, low_bits);
 		else
-			sprintf(tmp_buf, "%pOFn@%s%x",
-				dp,
+			sprintf(tmp_buf, "%s@%s%x",
+				dp->name,
 				prefix,
 				high_bits);
 	} else if (type == 12) {
-		sprintf(tmp_buf, "%pOFn@%x",
-			dp, high_bits);
+		sprintf(tmp_buf, "%s@%x",
+			dp->name, high_bits);
 	}
 }
 
@@ -122,8 +122,8 @@ static void __init sun4u_path_component(struct device_node *dp, char *tmp_buf)
 
 	regs = prop->value;
 	if (!of_node_is_root(dp->parent)) {
-		sprintf(tmp_buf, "%pOFn@%x,%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x,%x",
+			dp->name,
 			(unsigned int) (regs->phys_addr >> 32UL),
 			(unsigned int) (regs->phys_addr & 0xffffffffUL));
 		return;
@@ -138,8 +138,8 @@ static void __init sun4u_path_component(struct device_node *dp, char *tmp_buf)
 	if (tlb_type >= cheetah)
 		mask = 0x7fffff;
 
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		*(u32 *)prop->value,
 		(unsigned int) (regs->phys_addr & mask));
 }
@@ -156,8 +156,8 @@ static void __init sbus_path_component(struct device_node *dp, char *tmp_buf)
 		return;
 
 	regs = prop->value;
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		regs->which_io,
 		regs->phys_addr);
 }
@@ -176,13 +176,13 @@ static void __init pci_path_component(struct device_node *dp, char *tmp_buf)
 	regs = prop->value;
 	devfn = (regs->phys_hi >> 8) & 0xff;
 	if (devfn & 0x07) {
-		sprintf(tmp_buf, "%pOFn@%x,%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x,%x",
+			dp->name,
 			devfn >> 3,
 			devfn & 0x07);
 	} else {
-		sprintf(tmp_buf, "%pOFn@%x",
-			dp,
+		sprintf(tmp_buf, "%s@%x",
+			dp->name,
 			devfn >> 3);
 	}
 }
@@ -203,8 +203,8 @@ static void __init upa_path_component(struct device_node *dp, char *tmp_buf)
 	if (!prop)
 		return;
 
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		*(u32 *) prop->value,
 		(unsigned int) (regs->phys_addr & 0xffffffffUL));
 }
@@ -221,7 +221,7 @@ static void __init vdev_path_component(struct device_node *dp, char *tmp_buf)
 
 	regs = prop->value;
 
-	sprintf(tmp_buf, "%pOFn@%x", dp, *regs);
+	sprintf(tmp_buf, "%s@%x", dp->name, *regs);
 }
 
 /* "name@addrhi,addrlo" */
@@ -236,8 +236,8 @@ static void __init ebus_path_component(struct device_node *dp, char *tmp_buf)
 
 	regs = prop->value;
 
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp,
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name,
 		(unsigned int) (regs->phys_addr >> 32UL),
 		(unsigned int) (regs->phys_addr & 0xffffffffUL));
 }
@@ -257,8 +257,8 @@ static void __init i2c_path_component(struct device_node *dp, char *tmp_buf)
 	/* This actually isn't right... should look at the #address-cells
 	 * property of the i2c bus node etc. etc.
 	 */
-	sprintf(tmp_buf, "%pOFn@%x,%x",
-		dp, regs[0], regs[1]);
+	sprintf(tmp_buf, "%s@%x,%x",
+		dp->name, regs[0], regs[1]);
 }
 
 /* "name@reg0[,reg1]" */
@@ -274,11 +274,11 @@ static void __init usb_path_component(struct device_node *dp, char *tmp_buf)
 	regs = prop->value;
 
 	if (prop->length == sizeof(u32) || regs[1] == 1) {
-		sprintf(tmp_buf, "%pOFn@%x",
-			dp, regs[0]);
+		sprintf(tmp_buf, "%s@%x",
+			dp->name, regs[0]);
 	} else {
-		sprintf(tmp_buf, "%pOFn@%x,%x",
-			dp, regs[0], regs[1]);
+		sprintf(tmp_buf, "%s@%x,%x",
+			dp->name, regs[0], regs[1]);
 	}
 }
 
@@ -295,11 +295,11 @@ static void __init ieee1394_path_component(struct device_node *dp, char *tmp_buf
 	regs = prop->value;
 
 	if (regs[2] || regs[3]) {
-		sprintf(tmp_buf, "%pOFn@%08x%08x,%04x%08x",
-			dp, regs[0], regs[1], regs[2], regs[3]);
+		sprintf(tmp_buf, "%s@%08x%08x,%04x%08x",
+			dp->name, regs[0], regs[1], regs[2], regs[3]);
 	} else {
-		sprintf(tmp_buf, "%pOFn@%08x%08x",
-			dp, regs[0], regs[1]);
+		sprintf(tmp_buf, "%s@%08x%08x",
+			dp->name, regs[0], regs[1]);
 	}
 }
 
@@ -361,7 +361,7 @@ char * __init build_path_component(struct device_node *dp)
 	tmp_buf[0] = '\0';
 	__build_path_component(dp, tmp_buf);
 	if (tmp_buf[0] == '\0')
-		snprintf(tmp_buf, sizeof(tmp_buf), "%pOFn", dp);
+		strcpy(tmp_buf, dp->name);
 
 	n = prom_early_alloc(strlen(tmp_buf) + 1);
 	strcpy(n, tmp_buf);
@@ -84,8 +84,9 @@ __handle_signal:
 		ldx	[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
 		sethi	%hi(0xf << 20), %l4
 		and	%l1, %l4, %l4
+		andn	%l1, %l4, %l1
 		ba,pt	%xcc, __handle_preemption_continue
-		 andn	%l1, %l4, %l1
+		 srl	%l4, 20, %l4
 
 		/* When returning from a NMI (%pil==15) interrupt we want to
 		 * avoid running softirqs, doing IRQ tracing, preempting, etc.
@@ -90,4 +90,4 @@ sys_call_table:
 /*345*/	.long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/	.long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
 /*355*/	.long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
-/*360*/	.long sys_statx
+/*360*/	.long sys_statx, sys_io_pgetevents
@@ -91,7 +91,7 @@ sys_call_table32:
 	.word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/	.word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
 	.word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2
-/*360*/	.word sys_statx
+/*360*/	.word sys_statx, compat_sys_io_pgetevents
 
 #endif /* CONFIG_COMPAT */
 
@@ -173,4 +173,4 @@ sys_call_table:
 	.word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/	.word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
 	.word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
-/*360*/	.word sys_statx
+/*360*/	.word sys_statx, sys_io_pgetevents
@@ -33,9 +33,19 @@
 #define TICK_PRIV_BIT	(1ULL << 63)
 #endif
 
+#ifdef CONFIG_SPARC64
 #define SYSCALL_STRING							\
 	"ta	0x6d;"							\
-	"sub	%%g0, %%o0, %%o0;"					\
+	"bcs,a	1f;"							\
+	" sub	%%g0, %%o0, %%o0;"					\
+	"1:"
+#else
+#define SYSCALL_STRING							\
+	"ta	0x10;"							\
+	"bcs,a	1f;"							\
+	" sub	%%g0, %%o0, %%o0;"					\
+	"1:"
+#endif
 
 #define SYSCALL_CLOBBERS						\
 	"f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7",			\
@@ -262,7 +262,9 @@ static __init int vdso_setup(char *s)
 	unsigned long val;
 
 	err = kstrtoul(s, 10, &val);
+	if (err)
+		return err;
 	vdso_enabled = val;
-	return err;
+	return 0;
 }
 __setup("vdso=", vdso_setup);
@@ -37,6 +37,7 @@ KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
 KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
 KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
+KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
 KBUILD_CFLAGS += -Wno-pointer-sign
 
 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
 GCOV_PROFILE := n
@@ -389,6 +389,13 @@
 	 * that register for the time this macro runs
 	 */
 
+	/*
+	 * The high bits of the CS dword (__csh) are used for
+	 * CS_FROM_ENTRY_STACK and CS_FROM_USER_CR3. Clear them in case
+	 * hardware didn't do this for us.
+	 */
+	andl	$(0x0000ffff), PT_CS(%esp)
+
 	/* Are we on the entry stack? Bail out if not! */
 	movl	PER_CPU_VAR(cpu_entry_area), %ecx
 	addl	$CPU_ENTRY_AREA_entry_stack + SIZEOF_entry_stack, %ecx
@@ -407,12 +414,6 @@
 	/* Load top of task-stack into %edi */
 	movl	TSS_entry2task_stack(%edi), %edi
 
-	/*
-	 * Clear unused upper bits of the dword containing the word-sized CS
-	 * slot in pt_regs in case hardware didn't clear it for us.
-	 */
-	andl	$(0x0000ffff), PT_CS(%esp)
-
 	/* Special case - entry from kernel mode via entry stack */
 #ifdef CONFIG_VM86
 	movl	PT_EFLAGS(%esp), %ecx	# mix EFLAGS and CS
@@ -1187,6 +1187,16 @@ ENTRY(paranoid_entry)
 	xorl	%ebx, %ebx
 
 1:
+	/*
+	 * Always stash CR3 in %r14.  This value will be restored,
+	 * verbatim, at exit.  Needed if paranoid_entry interrupted
+	 * another entry that already switched to the user CR3 value
+	 * but has not yet returned to userspace.
+	 *
+	 * This is also why CS (stashed in the "iret frame" by the
+	 * hardware at entry) can not be used: this may be a return
+	 * to kernel code, but with a user CR3 value.
+	 */
 	SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
 
 	ret
@@ -1211,11 +1221,13 @@ ENTRY(paranoid_exit)
 	testl	%ebx, %ebx			/* swapgs needed? */
 	jnz	.Lparanoid_exit_no_swapgs
 	TRACE_IRQS_IRETQ
+	/* Always restore stashed CR3 value (see paranoid_entry) */
 	RESTORE_CR3	scratch_reg=%rbx save_reg=%r14
 	SWAPGS_UNSAFE_STACK
 	jmp	.Lparanoid_exit_restore
 .Lparanoid_exit_no_swapgs:
 	TRACE_IRQS_IRETQ_DEBUG
+	/* Always restore stashed CR3 value (see paranoid_entry) */
+	RESTORE_CR3	scratch_reg=%rbx save_reg=%r14
 .Lparanoid_exit_restore:
 	jmp restore_regs_and_return_to_kernel
@@ -1626,6 +1638,7 @@ end_repeat_nmi:
 	movq	$-1, %rsi
 	call	do_nmi
 
+	/* Always restore stashed CR3 value (see paranoid_entry) */
 	RESTORE_CR3 scratch_reg=%r15 save_reg=%r14
 
 	testl	%ebx, %ebx			/* swapgs needed? */
@@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu)
 static inline void
 switch_fpu_prepare(struct fpu *old_fpu, int cpu)
 {
-	if (old_fpu->initialized) {
+	if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
 		if (!copy_fpregs_to_fpstate(old_fpu))
 			old_fpu->last_cpu = -1;
 		else
@@ -185,22 +185,22 @@ do {									\
 	typeof(var) pfo_ret__;				\
 	switch (sizeof(var)) {				\
 	case 1:						\
-		asm(op "b "__percpu_arg(1)",%0"		\
+		asm volatile(op "b "__percpu_arg(1)",%0"\
 		    : "=q" (pfo_ret__)			\
 		    : "m" (var));			\
 		break;					\
 	case 2:						\
-		asm(op "w "__percpu_arg(1)",%0"		\
+		asm volatile(op "w "__percpu_arg(1)",%0"\
 		    : "=r" (pfo_ret__)			\
 		    : "m" (var));			\
 		break;					\
 	case 4:						\
-		asm(op "l "__percpu_arg(1)",%0"		\
+		asm volatile(op "l "__percpu_arg(1)",%0"\
 		    : "=r" (pfo_ret__)			\
 		    : "m" (var));			\
 		break;					\
 	case 8:						\
-		asm(op "q "__percpu_arg(1)",%0"		\
+		asm volatile(op "q "__percpu_arg(1)",%0"\
 		    : "=r" (pfo_ret__)			\
 		    : "m" (var));			\
 		break;					\
@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		 * thread's fpu state, reconstruct fxstate from the fsave
 		 * header. Validate and sanitize the copied state.
 		 */
-		struct fpu *fpu = &tsk->thread.fpu;
 		struct user_i387_ia32_struct env;
 		int err = 0;
@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
 int __init pci_swiotlb_detect_4gb(void)
 {
 	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
-#ifdef CONFIG_X86_64
 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
 		swiotlb = 1;
-#endif
 
 	/*
 	 * If SME is active then swiotlb will be set to 1 so that bounce
@@ -25,7 +25,7 @@
 #include <asm/time.h>
 
 #ifdef CONFIG_X86_64
-__visible volatile unsigned long jiffies __cacheline_aligned = INITIAL_JIFFIES;
+__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
 #endif
 
 unsigned long profile_pc(struct pt_regs *regs)
@@ -58,7 +58,7 @@ struct cyc2ns {
 
 static DEFINE_PER_CPU_ALIGNED(struct cyc2ns, cyc2ns);
 
-void cyc2ns_read_begin(struct cyc2ns_data *data)
+void __always_inline cyc2ns_read_begin(struct cyc2ns_data *data)
 {
 	int seq, idx;
 
@@ -75,7 +75,7 @@ void cyc2ns_read_begin(struct cyc2ns_data *data)
 	} while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence)));
 }
 
-void cyc2ns_read_end(void)
+void __always_inline cyc2ns_read_end(void)
 {
 	preempt_enable_notrace();
 }
 
@@ -104,7 +104,7 @@ void cyc2ns_read_end(void)
  * -johnstul@us.ibm.com "math is hard, lets go shopping!"
  */
 
-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
+static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
 {
 	struct cyc2ns_data data;
 	unsigned long long ns;
@@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 {
 	struct request_queue *q = bdev_get_queue(bdev);
 	struct bio *bio = *biop;
-	unsigned int granularity;
 	unsigned int op;
-	int alignment;
 	sector_t bs_mask;
 
 	if (!q)
@@ -54,38 +52,16 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	if ((sector | nr_sects) & bs_mask)
 		return -EINVAL;
 
-	/* Zero-sector (unknown) and one-sector granularities are the same.  */
-	granularity = max(q->limits.discard_granularity >> 9, 1U);
-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
-
 	while (nr_sects) {
-		unsigned int req_sects;
-		sector_t end_sect, tmp;
+		unsigned int req_sects = nr_sects;
+		sector_t end_sect;
 
-		/*
-		 * Issue in chunks of the user defined max discard setting,
-		 * ensuring that bi_size doesn't overflow
-		 */
-		req_sects = min_t(sector_t, nr_sects,
-				  q->limits.max_discard_sectors);
-		if (!req_sects)
-			goto fail;
 		if (req_sects > UINT_MAX >> 9)
 			req_sects = UINT_MAX >> 9;
 
-		/*
-		 * If splitting a request, and the next starting sector would be
-		 * misaligned, stop the discard at the previous aligned sector.
-		 */
 		end_sect = sector + req_sects;
-		tmp = end_sect;
-		if (req_sects < nr_sects &&
-		    sector_div(tmp, granularity) != alignment) {
-			end_sect = end_sect - alignment;
-			sector_div(end_sect, granularity);
-			end_sect = end_sect * granularity + alignment;
-			req_sects = end_sect - sector;
-		}
 
 		bio = next_bio(bio, 0, gfp_mask);
 		bio->bi_iter.bi_sector = sector;
@@ -36,6 +36,10 @@ MODULE_VERSION(DRV_MODULE_VERSION);
 #define VDC_TX_RING_SIZE	512
 #define VDC_DEFAULT_BLK_SIZE	512
 
+#define MAX_XFER_BLKS		(128 * 1024)
+#define MAX_XFER_SIZE		(MAX_XFER_BLKS / VDC_DEFAULT_BLK_SIZE)
+#define MAX_RING_COOKIES	((MAX_XFER_BLKS / PAGE_SIZE) + 2)
+
 #define WAITING_FOR_LINK_UP	0x01
 #define WAITING_FOR_TX_SPACE	0x02
 #define WAITING_FOR_GEN_CMD	0x04
@@ -450,7 +454,7 @@ static int __send_request(struct request *req)
 {
 	struct vdc_port *port = req->rq_disk->private_data;
 	struct vio_dring_state *dr = &port->vio.drings[VIO_DRIVER_TX_RING];
-	struct scatterlist sg[port->ring_cookies];
+	struct scatterlist sg[MAX_RING_COOKIES];
 	struct vdc_req_entry *rqe;
 	struct vio_disk_desc *desc;
 	unsigned int map_perm;
@@ -458,6 +462,9 @@ static int __send_request(struct request *req)
 	u64 len;
 	u8 op;
 
+	if (WARN_ON(port->ring_cookies > MAX_RING_COOKIES))
+		return -EINVAL;
+
 	map_perm = LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_IO;
 
 	if (rq_data_dir(req) == READ) {
@@ -984,9 +991,8 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 		goto err_out_free_port;
 
 	port->vdisk_block_size = VDC_DEFAULT_BLK_SIZE;
-	port->max_xfer_size = ((128 * 1024) / port->vdisk_block_size);
-	port->ring_cookies = ((port->max_xfer_size *
-			       port->vdisk_block_size) / PAGE_SIZE) + 2;
+	port->max_xfer_size = MAX_XFER_SIZE;
+	port->ring_cookies = MAX_RING_COOKIES;
 
 	err = vio_ldc_alloc(&port->vio, &vdc_ldc_cfg, port);
 	if (err)
@@ -1434,8 +1434,16 @@ static void __init sun4i_ccu_init(struct device_node *node,
 		return;
 	}
 
-	/* Force the PLL-Audio-1x divider to 1 */
 	val = readl(reg + SUN4I_PLL_AUDIO_REG);
+
+	/*
+	 * Force VCO and PLL bias current to lowest setting. Higher
+	 * settings interfere with sigma-delta modulation and result
+	 * in audible noise and distortions when using SPDIF or I2S.
+	 */
+	val &= ~GENMASK(25, 16);
+
+	/* Force the PLL-Audio-1x divider to 1 */
 	val &= ~GENMASK(29, 26);
 	writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG);
@@ -174,6 +174,11 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
 		state->crtcs[i].state = NULL;
 		state->crtcs[i].old_state = NULL;
 		state->crtcs[i].new_state = NULL;
+
+		if (state->crtcs[i].commit) {
+			drm_crtc_commit_put(state->crtcs[i].commit);
+			state->crtcs[i].commit = NULL;
+		}
 	}
 
 	for (i = 0; i < config->num_total_plane; i++) {
@@ -1408,15 +1408,16 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
 void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
 					  struct drm_atomic_state *old_state)
 {
-	struct drm_crtc_state *new_crtc_state;
 	struct drm_crtc *crtc;
 	int i;
 
-	for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
-		struct drm_crtc_commit *commit = new_crtc_state->commit;
+	for (i = 0; i < dev->mode_config.num_crtc; i++) {
+		struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
 		int ret;
 
-		if (!commit)
+		crtc = old_state->crtcs[i].ptr;
+
+		if (!crtc || !commit)
 			continue;
 
 		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
@@ -1934,6 +1935,9 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
 		drm_crtc_commit_get(commit);
 
 		commit->abort_completion = true;
+
+		state->crtcs[i].commit = commit;
+		drm_crtc_commit_get(commit);
 	}
 
 	for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
@@ -567,9 +567,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
 	struct drm_mode_crtc *crtc_req = data;
 	struct drm_crtc *crtc;
 	struct drm_plane *plane;
-	struct drm_connector **connector_set = NULL, *connector;
-	struct drm_framebuffer *fb = NULL;
-	struct drm_display_mode *mode = NULL;
+	struct drm_connector **connector_set, *connector;
+	struct drm_framebuffer *fb;
+	struct drm_display_mode *mode;
 	struct drm_mode_set set;
 	uint32_t __user *set_connectors_ptr;
 	struct drm_modeset_acquire_ctx ctx;
@@ -598,6 +598,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
 	mutex_lock(&crtc->dev->mode_config.mutex);
 	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
 retry:
+	connector_set = NULL;
+	fb = NULL;
+	mode = NULL;
+
 	ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
 	if (ret)
 		goto out;
@@ -121,6 +121,9 @@ static const struct edid_quirk {
 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
 	{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },
 
+	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
+	{ "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC },
+
 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
 
@@ -4443,7 +4446,7 @@ static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
 	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
 
 	dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
-	hdmi->y420_dc_modes |= dc_mask;
+	hdmi->y420_dc_modes = dc_mask;
 }
 
 static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
@@ -1580,6 +1580,25 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
 }
 EXPORT_SYMBOL(drm_fb_helper_ioctl);
 
+static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1,
+				      const struct fb_var_screeninfo *var_2)
+{
+	return var_1->bits_per_pixel == var_2->bits_per_pixel &&
+	       var_1->grayscale == var_2->grayscale &&
+	       var_1->red.offset == var_2->red.offset &&
+	       var_1->red.length == var_2->red.length &&
+	       var_1->red.msb_right == var_2->red.msb_right &&
+	       var_1->green.offset == var_2->green.offset &&
+	       var_1->green.length == var_2->green.length &&
+	       var_1->green.msb_right == var_2->green.msb_right &&
+	       var_1->blue.offset == var_2->blue.offset &&
+	       var_1->blue.length == var_2->blue.length &&
+	       var_1->blue.msb_right == var_2->blue.msb_right &&
+	       var_1->transp.offset == var_2->transp.offset &&
+	       var_1->transp.length == var_2->transp.length &&
+	       var_1->transp.msb_right == var_2->transp.msb_right;
+}
+
 /**
  * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
  * @var: screeninfo to check
@@ -1590,7 +1609,6 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
 {
 	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_framebuffer *fb = fb_helper->fb;
-	int depth;
 
 	if (var->pixclock != 0 || in_dbg_master())
 		return -EINVAL;
@@ -1610,72 +1628,15 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
 		return -EINVAL;
 	}
 
-	switch (var->bits_per_pixel) {
-	case 16:
-		depth = (var->green.length == 6) ? 16 : 15;
-		break;
-	case 32:
-		depth = (var->transp.length > 0) ? 32 : 24;
-		break;
-	default:
-		depth = var->bits_per_pixel;
-		break;
-	}
-
-	switch (depth) {
-	case 8:
-		var->red.offset = 0;
-		var->green.offset = 0;
-		var->blue.offset = 0;
-		var->red.length = 8;
-		var->green.length = 8;
-		var->blue.length = 8;
-		var->transp.length = 0;
-		var->transp.offset = 0;
-		break;
-	case 15:
-		var->red.offset = 10;
-		var->green.offset = 5;
-		var->blue.offset = 0;
-		var->red.length = 5;
-		var->green.length = 5;
-		var->blue.length = 5;
-		var->transp.length = 1;
-		var->transp.offset = 15;
-		break;
-	case 16:
-		var->red.offset = 11;
-		var->green.offset = 5;
-		var->blue.offset = 0;
-		var->red.length = 5;
-		var->green.length = 6;
-		var->blue.length = 5;
-		var->transp.length = 0;
-		var->transp.offset = 0;
-		break;
-	case 24:
-		var->red.offset = 16;
-		var->green.offset = 8;
-		var->blue.offset = 0;
-		var->red.length = 8;
-		var->green.length = 8;
-		var->blue.length = 8;
-		var->transp.length = 0;
-		var->transp.offset = 0;
-		break;
-	case 32:
-		var->red.offset = 16;
-		var->green.offset = 8;
-		var->blue.offset = 0;
-		var->red.length = 8;
-		var->green.length = 8;
-		var->blue.length = 8;
-		var->transp.length = 8;
-		var->transp.offset = 24;
-		break;
-	default:
+	/*
+	 * drm fbdev emulation doesn't support changing the pixel format at all,
+	 * so reject all pixel format changing requests.
+	 */
+	if (!drm_fb_pixel_format_equal(var, &info->var)) {
+		DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n");
 		return -EINVAL;
 	}
 
 	return 0;
 }
 EXPORT_SYMBOL(drm_fb_helper_check_var);
@@ -81,9 +81,19 @@ static long sun4i_dclk_round_rate(struct clk_hw *hw, unsigned long rate,
 	int i;
 
 	for (i = tcon->dclk_min_div; i <= tcon->dclk_max_div; i++) {
-		unsigned long ideal = rate * i;
+		u64 ideal = (u64)rate * i;
 		unsigned long rounded;
 
+		/*
+		 * ideal has overflowed the max value that can be stored in an
+		 * unsigned long, and every clk operation we might do on a
+		 * truncated u64 value will give us incorrect results.
+		 * Let's just stop there since bigger dividers will result in
+		 * the same overflow issue.
+		 */
+		if (ideal > ULONG_MAX)
+			goto out;
+
 		rounded = clk_hw_round_rate(clk_hw_get_parent(hw),
 					    ideal);
 
@@ -806,8 +806,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
 
 	time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
 				     num * adap->timeout);
-	if (!time_left) {
+
+	/* cleanup DMA if it couldn't complete properly due to an error */
+	if (priv->dma_direction != DMA_NONE)
+		rcar_i2c_cleanup_dma(priv);
+
+	if (!time_left) {
 		rcar_i2c_init(priv);
 		ret = -ETIMEDOUT;
 	} else if (priv->flags & ID_NACK) {
@@ -46,6 +46,8 @@
 #include <linux/mutex.h>
 #include <linux/slab.h>
 
+#include <linux/nospec.h>
+
 #include <linux/uaccess.h>
 
 #include <rdma/ib.h>
@@ -1120,6 +1122,7 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
 
 	if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
 		return -EINVAL;
+	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table));
 
 	if (hdr.in + sizeof(hdr) > len)
 		return -EINVAL;
@@ -44,6 +44,8 @@
 #include <linux/module.h>
 #include <linux/nsproxy.h>
 
+#include <linux/nospec.h>
+
 #include <rdma/rdma_user_cm.h>
 #include <rdma/ib_marshall.h>
 #include <rdma/rdma_cm.h>
@@ -1676,6 +1678,7 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
 
 	if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
 		return -EINVAL;
+	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));
 
 	if (hdr.in + sizeof(hdr) > len)
 		return -EINVAL;
@@ -1346,6 +1346,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
 	{ "ELAN0611", 0 },
 	{ "ELAN0612", 0 },
 	{ "ELAN0618", 0 },
+	{ "ELAN061C", 0 },
 	{ "ELAN061D", 0 },
 	{ "ELAN0622", 0 },
 	{ "ELAN1000", 0 },
@@ -321,9 +321,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
 	phydev->advertising = phydev->supported;
 
 	/* The internal PHY has its link interrupts routed to the
-	 * Ethernet MAC ISRs
+	 * Ethernet MAC ISRs. On GENETv5 there is a hardware issue
+	 * that prevents the signaling of link UP interrupts when
+	 * the link operates at 10Mbps, so fallback to polling for
+	 * those versions of GENET.
 	 */
-	if (priv->internal_phy)
+	if (priv->internal_phy && !GENET_IS_V5(priv))
 		dev->phydev->irq = PHY_IGNORE_INTERRUPT;
 
 	return 0;
@@ -452,6 +452,10 @@ struct bufdesc_ex {
  * initialisation.
  */
 #define FEC_QUIRK_MIB_CLEAR		(1 << 15)
+/* Only i.MX25/i.MX27/i.MX28 controller supports FRBR,FRSR registers,
+ * those FIFO receive registers are resolved in other platforms.
+ */
+#define FEC_QUIRK_HAS_FRREG		(1 << 16)
 
 struct bufdesc_prop {
 	int qid;
@@ -91,14 +91,16 @@ static struct platform_device_id fec_devtype[] = {
 		.driver_data = 0,
 	}, {
 		.name = "imx25-fec",
-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR,
+		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
+			       FEC_QUIRK_HAS_FRREG,
 	}, {
 		.name = "imx27-fec",
-		.driver_data = FEC_QUIRK_MIB_CLEAR,
+		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
 	}, {
 		.name = "imx28-fec",
 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
-				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC,
+				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
+				FEC_QUIRK_HAS_FRREG,
 	}, {
 		.name = "imx6q-fec",
 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
@@ -2164,7 +2166,13 @@ static void fec_enet_get_regs(struct net_device *ndev,
 	memset(buf, 0, regs->len);
 
 	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
-		off = fec_enet_register_offset[i] / 4;
+		off = fec_enet_register_offset[i];
+
+		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
+		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))
+			continue;
+
+		off >>= 2;
 		buf[off] = readl(&theregs[off]);
 	}
 }
@@ -432,10 +432,9 @@ static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
 
 static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
 					      struct mlx5_wq_cyc *wq,
-					      u16 pi, u16 frag_pi)
+					      u16 pi, u16 nnops)
 {
 	struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi];
-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
 
 	edge_wi = wi + nnops;
 
@@ -454,15 +453,14 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	struct mlx5_wq_cyc *wq = &sq->wq;
 	struct mlx5e_umr_wqe *umr_wqe;
 	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
-	u16 pi, frag_pi;
+	u16 pi, contig_wqebbs_room;
 	int err;
 	int i;
 
 	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
-
-	if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) {
-		mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi);
+	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+	if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
+		mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room);
 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
 	}
 
@@ -290,10 +290,9 @@ mlx5e_txwqe_build_dsegs(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 
 static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
 					   struct mlx5_wq_cyc *wq,
-					   u16 pi, u16 frag_pi)
+					   u16 pi, u16 nnops)
 {
 	struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
 
 	edge_wi = wi + nnops;
 
@@ -348,8 +347,8 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	struct mlx5e_tx_wqe_info *wi;
 
 	struct mlx5e_sq_stats *stats = sq->stats;
+	u16 headlen, ihs, contig_wqebbs_room;
 	u16 ds_cnt, ds_cnt_inl = 0;
-	u16 headlen, ihs, frag_pi;
 	u8 num_wqebbs, opcode;
 	u32 num_bytes;
 	int num_dma;
@@ -386,9 +385,9 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	}
 
 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
+	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
+		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
 		mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
 	}
 
@@ -636,7 +635,7 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	struct mlx5e_tx_wqe_info *wi;
 
 	struct mlx5e_sq_stats *stats = sq->stats;
-	u16 headlen, ihs, pi, frag_pi;
+	u16 headlen, ihs, pi, contig_wqebbs_room;
 	u16 ds_cnt, ds_cnt_inl = 0;
 	u8 num_wqebbs, opcode;
 	u32 num_bytes;
@@ -672,13 +671,14 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	}
 
 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
+	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
+		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
 	}
 
-	mlx5i_sq_fetch_wqe(sq, &wqe, &pi);
+	mlx5i_sq_fetch_wqe(sq, &wqe, pi);
 
 	/* fill wqe */
 	wi = &sq->db.wqe_info[pi];
@@ -273,7 +273,7 @@ static void eq_pf_process(struct mlx5_eq *eq)
 		case MLX5_PFAULT_SUBTYPE_WQE:
 			/* WQE based event */
 			pfault->type =
-				be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24;
+				(be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7;
 			pfault->token =
 				be32_to_cpu(pf_eqe->wqe.token);
 			pfault->wqe.wq_num =
@@ -245,7 +245,7 @@ static void *mlx5_fpga_ipsec_cmd_exec(struct mlx5_core_dev *mdev,
 		return ERR_PTR(res);
 	}
 
-	/* Context will be freed by wait func after completion */
+	/* Context should be freed by the caller after completion. */
 	return context;
 }
 
@@ -418,10 +418,8 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
 	cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP);
 	cmd.flags = htonl(flags);
 	context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd));
-	if (IS_ERR(context)) {
-		err = PTR_ERR(context);
-		goto out;
-	}
+	if (IS_ERR(context))
+		return PTR_ERR(context);
 
 	err = mlx5_fpga_ipsec_cmd_wait(context);
 	if (err)
@@ -435,6 +433,7 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
 	}
 
 out:
+	kfree(context);
 	return err;
 }
 
@@ -109,12 +109,11 @@ struct mlx5i_tx_wqe {
 
 static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq,
 				      struct mlx5i_tx_wqe **wqe,
-				      u16 *pi)
+				      u16 pi)
 {
 	struct mlx5_wq_cyc *wq = &sq->wq;
 
-	*pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
-	*wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
+	*wqe = mlx5_wq_cyc_get_wqe(wq, pi);
 	memset(*wqe, 0, sizeof(**wqe));
 }
 
@@ -39,11 +39,6 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
 	return (u32)wq->fbc.sz_m1 + 1;
 }
 
-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
-{
-	return wq->fbc.frag_sz_m1 + 1;
-}
-
 u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
 {
 	return wq->fbc.sz_m1 + 1;
@@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		       void *wqc, struct mlx5_wq_cyc *wq,
 		       struct mlx5_wq_ctrl *wq_ctrl);
 u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
 
 int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		      void *qpc, struct mlx5_wq_qp *wq,
@@ -140,11 +139,6 @@ static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
 	return ctr & wq->fbc.sz_m1;
 }
 
-static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr)
-{
-	return ctr & wq->fbc.frag_sz_m1;
-}
-
 static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq)
 {
 	return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
@@ -160,6 +154,11 @@ static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix)
 	return mlx5_frag_buf_get_wqe(&wq->fbc, ix);
 }
 
+static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix)
+{
+	return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1;
+}
+
 static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
 {
 	int equal = (cc1 == cc2);
@@ -1055,6 +1055,7 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
 err_driver_init:
 	mlxsw_thermal_fini(mlxsw_core->thermal);
 err_thermal_init:
+	mlxsw_hwmon_fini(mlxsw_core->hwmon);
 err_hwmon_init:
 	if (!reload)
 		devlink_unregister(devlink);
@@ -1088,6 +1089,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
 	if (mlxsw_core->driver->fini)
 		mlxsw_core->driver->fini(mlxsw_core);
 	mlxsw_thermal_fini(mlxsw_core->thermal);
+	mlxsw_hwmon_fini(mlxsw_core->hwmon);
 	if (!reload)
 		devlink_unregister(devlink);
 	mlxsw_emad_fini(mlxsw_core);
@@ -359,6 +359,10 @@ static inline int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
 	return 0;
 }
 
+static inline void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon)
+{
+}
+
 #endif
 
 struct mlxsw_thermal;
@@ -303,8 +303,7 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
 	struct device *hwmon_dev;
 	int err;
 
-	mlxsw_hwmon = devm_kzalloc(mlxsw_bus_info->dev, sizeof(*mlxsw_hwmon),
-				   GFP_KERNEL);
+	mlxsw_hwmon = kzalloc(sizeof(*mlxsw_hwmon), GFP_KERNEL);
 	if (!mlxsw_hwmon)
 		return -ENOMEM;
 	mlxsw_hwmon->core = mlxsw_core;
@@ -321,10 +320,9 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
 	mlxsw_hwmon->groups[0] = &mlxsw_hwmon->group;
 	mlxsw_hwmon->group.attrs = mlxsw_hwmon->attrs;
 
-	hwmon_dev = devm_hwmon_device_register_with_groups(mlxsw_bus_info->dev,
-							   "mlxsw",
-							   mlxsw_hwmon,
-							   mlxsw_hwmon->groups);
+	hwmon_dev = hwmon_device_register_with_groups(mlxsw_bus_info->dev,
+						      "mlxsw", mlxsw_hwmon,
+						      mlxsw_hwmon->groups);
 	if (IS_ERR(hwmon_dev)) {
 		err = PTR_ERR(hwmon_dev);
 		goto err_hwmon_register;
@@ -337,5 +335,12 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
 err_hwmon_register:
 err_fans_init:
 err_temp_init:
+	kfree(mlxsw_hwmon);
 	return err;
 }
+
+void mlxsw_hwmon_fini(struct mlxsw_hwmon *mlxsw_hwmon)
+{
+	hwmon_device_unregister(mlxsw_hwmon->hwmon_dev);
+	kfree(mlxsw_hwmon);
+}
@@ -133,9 +133,9 @@ static inline int ocelot_vlant_wait_for_completion(struct ocelot *ocelot)
 {
 	unsigned int val, timeout = 10;
 
-	/* Wait for the issued mac table command to be completed, or timeout.
-	 * When the command read from ANA_TABLES_MACACCESS is
-	 * MACACCESS_CMD_IDLE, the issued command completed successfully.
+	/* Wait for the issued vlan table command to be completed, or timeout.
+	 * When the command read from ANA_TABLES_VLANACCESS is
+	 * VLANACCESS_CMD_IDLE, the issued command completed successfully.
 	 */
 	do {
 		val = ocelot_read(ocelot, ANA_TABLES_VLANACCESS);
@@ -429,12 +429,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
 
 	switch (off) {
 	case offsetof(struct iphdr, daddr):
-		set_ip_addr->ipv4_dst_mask = mask;
-		set_ip_addr->ipv4_dst = exact;
+		set_ip_addr->ipv4_dst_mask |= mask;
+		set_ip_addr->ipv4_dst &= ~mask;
+		set_ip_addr->ipv4_dst |= exact & mask;
 		break;
 	case offsetof(struct iphdr, saddr):
-		set_ip_addr->ipv4_src_mask = mask;
-		set_ip_addr->ipv4_src = exact;
+		set_ip_addr->ipv4_src_mask |= mask;
+		set_ip_addr->ipv4_src &= ~mask;
+		set_ip_addr->ipv4_src |= exact & mask;
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -448,11 +450,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
 }
 
 static void
-nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
+nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
 		      struct nfp_fl_set_ipv6_addr *ip6)
 {
-	ip6->ipv6[idx % 4].mask = mask;
-	ip6->ipv6[idx % 4].exact = exact;
+	ip6->ipv6[word].mask |= mask;
+	ip6->ipv6[word].exact &= ~mask;
+	ip6->ipv6[word].exact |= exact & mask;
 
 	ip6->reserved = cpu_to_be16(0);
 	ip6->head.jump_id = opcode_tag;
@@ -465,6 +468,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
 	       struct nfp_fl_set_ipv6_addr *ip_src)
 {
 	__be32 exact, mask;
+	u8 word;
 
 	/* We are expecting tcf_pedit to return a big endian value */
 	mask = (__force __be32)~tcf_pedit_mask(action, idx);
@@ -473,17 +477,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
 	if (exact & ~mask)
 		return -EOPNOTSUPP;
 
-	if (off < offsetof(struct ipv6hdr, saddr))
+	if (off < offsetof(struct ipv6hdr, saddr)) {
 		return -EOPNOTSUPP;
-	else if (off < offsetof(struct ipv6hdr, daddr))
-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
+	} else if (off < offsetof(struct ipv6hdr, daddr)) {
+		word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
+		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
 				      exact, mask, ip_src);
-	else if (off < offsetof(struct ipv6hdr, daddr) +
-		 sizeof(struct in6_addr))
-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
+	} else if (off < offsetof(struct ipv6hdr, daddr) +
+		   sizeof(struct in6_addr)) {
+		word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
+		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
 				      exact, mask, ip_dst);
-	else
+	} else {
 		return -EOPNOTSUPP;
+	}
 
 	return 0;
 }
@@ -541,7 +548,7 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 	struct nfp_fl_set_eth set_eth;
 	enum pedit_header_type htype;
 	int idx, nkeys, err;
-	size_t act_size;
+	size_t act_size = 0;
 	u32 offset, cmd;
 	u8 ip_proto = 0;
 
@@ -599,7 +606,9 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 		act_size = sizeof(set_eth);
 		memcpy(nfp_action, &set_eth, act_size);
 		*a_len += act_size;
-	} else if (set_ip_addr.head.len_lw) {
+	}
+	if (set_ip_addr.head.len_lw) {
+		nfp_action += act_size;
 		act_size = sizeof(set_ip_addr);
 		memcpy(nfp_action, &set_ip_addr, act_size);
 		*a_len += act_size;
@@ -607,10 +616,12 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 		/* Hardware will automatically fix IPv4 and TCP/UDP checksum. */
 		*csum_updated |= TCA_CSUM_UPDATE_FLAG_IPV4HDR |
 				nfp_fl_csum_l4_to_flag(ip_proto);
-	} else if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) {
+	}
+	if (set_ip6_dst.head.len_lw && set_ip6_src.head.len_lw) {
 		/* TC compiles set src and dst IPv6 address as a single action,
 		 * the hardware requires this to be 2 separate actions.
 		 */
+		nfp_action += act_size;
 		act_size = sizeof(set_ip6_src);
 		memcpy(nfp_action, &set_ip6_src, act_size);
 		*a_len += act_size;
@@ -623,6 +634,7 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 		/* Hardware will automatically fix TCP/UDP checksum. */
 		*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
 	} else if (set_ip6_dst.head.len_lw) {
+		nfp_action += act_size;
 		act_size = sizeof(set_ip6_dst);
 		memcpy(nfp_action, &set_ip6_dst, act_size);
 		*a_len += act_size;
@@ -630,13 +642,16 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 		/* Hardware will automatically fix TCP/UDP checksum. */
 		*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
 	} else if (set_ip6_src.head.len_lw) {
+		nfp_action += act_size;
 		act_size = sizeof(set_ip6_src);
 		memcpy(nfp_action, &set_ip6_src, act_size);
 		*a_len += act_size;
 
 		/* Hardware will automatically fix TCP/UDP checksum. */
 		*csum_updated |= nfp_fl_csum_l4_to_flag(ip_proto);
-	} else if (set_tport.head.len_lw) {
+	}
+	if (set_tport.head.len_lw) {
+		nfp_action += act_size;
 		act_size = sizeof(set_tport);
 		memcpy(nfp_action, &set_tport, act_size);
 		*a_len += act_size;
@@ -228,7 +228,7 @@ static int qed_grc_attn_cb(struct qed_hwfn *p_hwfn)
 		   attn_master_to_str(GET_FIELD(tmp, QED_GRC_ATTENTION_MASTER)),
 		   GET_FIELD(tmp2, QED_GRC_ATTENTION_PF),
 		   (GET_FIELD(tmp2, QED_GRC_ATTENTION_PRIV) ==
-		    QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Ireelevant)",
+		    QED_GRC_ATTENTION_PRIV_VF) ? "VF" : "(Irrelevant)",
 		   GET_FIELD(tmp2, QED_GRC_ATTENTION_VF));
 
 out:
@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
 
 	qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
-	ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
+	ql_write_nvram_reg(qdev, spir,
+			   ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
 }
 
 /*
@@ -6549,17 +6549,15 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
 	struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
 	struct net_device *dev = tp->dev;
 	u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
-	int work_done= 0;
+	int work_done;
 	u16 status;
 
 	status = rtl_get_events(tp);
 	rtl_ack_events(tp, status & ~tp->event_slow);
 
-	if (status & RTL_EVENT_NAPI_RX)
-		work_done = rtl_rx(dev, tp, (u32) budget);
+	work_done = rtl_rx(dev, tp, (u32) budget);
 
-	if (status & RTL_EVENT_NAPI_TX)
-		rtl_tx(dev, tp);
+	rtl_tx(dev, tp);
 
 	if (status & tp->event_slow) {
 		enable_mask &= ~tp->event_slow;
@@ -7093,20 +7091,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
 {
 	unsigned int flags;
 
-	switch (tp->mac_version) {
-	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
 		flags = PCI_IRQ_LEGACY;
-		break;
-	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
-		/* This version was reported to have issues with resume
-		 * from suspend when using MSI-X
-		 */
-		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
-		break;
-	default:
+	} else {
 		flags = PCI_IRQ_ALL_TYPES;
 	}
 
@@ -830,12 +830,8 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	if (IS_ERR(rt))
 		return PTR_ERR(rt);
 
-	if (skb_dst(skb)) {
-		int mtu = dst_mtu(&rt->dst) - GENEVE_IPV4_HLEN -
-			  info->options_len;
-
-		skb_dst_update_pmtu(skb, mtu);
-	}
+	skb_tunnel_check_pmtu(skb, &rt->dst,
+			      GENEVE_IPV4_HLEN + info->options_len);
 
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->collect_md) {
@@ -876,11 +872,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	if (IS_ERR(dst))
 		return PTR_ERR(dst);
 
-	if (skb_dst(skb)) {
-		int mtu = dst_mtu(dst) - GENEVE_IPV6_HLEN - info->options_len;
-
-		skb_dst_update_pmtu(skb, mtu);
-	}
+	skb_tunnel_check_pmtu(skb, dst, GENEVE_IPV6_HLEN + info->options_len);
 
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->collect_md) {
@@ -2218,8 +2218,9 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device */
 	flush_work(&vi->config_work);
 
+	netif_tx_lock_bh(vi->dev);
 	netif_device_detach(vi->dev);
-	netif_tx_disable(vi->dev);
+	netif_tx_unlock_bh(vi->dev);
 	cancel_delayed_work_sync(&vi->refill);
 
 	if (netif_running(vi->dev)) {
@@ -2255,7 +2256,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 		}
 	}
 
+	netif_tx_lock_bh(vi->dev);
 	netif_device_attach(vi->dev);
+	netif_tx_unlock_bh(vi->dev);
 	return err;
 }
 
@@ -2194,11 +2194,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
 	}
 
 	ndst = &rt->dst;
-	if (skb_dst(skb)) {
-		int mtu = dst_mtu(ndst) - VXLAN_HEADROOM;
-
-		skb_dst_update_pmtu(skb, mtu);
-	}
+	skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);
 
 	tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
 	ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
@@ -2235,11 +2231,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
 		goto out_unlock;
 	}
 
-	if (skb_dst(skb)) {
-		int mtu = dst_mtu(ndst) - VXLAN6_HEADROOM;
-
-		skb_dst_update_pmtu(skb, mtu);
-	}
+	skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);
 
 	tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
 	ttl = ttl ? : ip6_dst_hoplimit(ndst);
@@ -3143,8 +3143,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 	}
 
 	mutex_lock(&ns->ctrl->subsys->lock);
-	nvme_mpath_clear_current_path(ns);
 	list_del_rcu(&ns->siblings);
+	nvme_mpath_clear_current_path(ns);
 	mutex_unlock(&ns->ctrl->subsys->lock);
 
 	down_write(&ns->ctrl->namespaces_rwsem);
@@ -24,6 +24,8 @@
 #include <linux/slab.h>
 #include <linux/timekeeping.h>
 
+#include <linux/nospec.h>
+
 #include "ptp_private.h"
 
 static int ptp_disable_pinfunc(struct ptp_clock_info *ops,
@@ -248,6 +250,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
 			err = -EINVAL;
 			break;
 		}
+		pin_index = array_index_nospec(pin_index, ops->n_pins);
 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
 			return -ERESTARTSYS;
 		pd = ops->pin_config[pin_index];
@@ -266,6 +269,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
 			err = -EINVAL;
 			break;
 		}
+		pin_index = array_index_nospec(pin_index, ops->n_pins);
 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
 			return -ERESTARTSYS;
 		err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
@@ -310,17 +310,17 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
 
 		if (difference & ACM_CTRL_DSR)
 			acm->iocount.dsr++;
-		if (difference & ACM_CTRL_BRK)
-			acm->iocount.brk++;
-		if (difference & ACM_CTRL_RI)
-			acm->iocount.rng++;
 		if (difference & ACM_CTRL_DCD)
 			acm->iocount.dcd++;
-		if (difference & ACM_CTRL_FRAMING)
+		if (newctrl & ACM_CTRL_BRK)
+			acm->iocount.brk++;
+		if (newctrl & ACM_CTRL_RI)
+			acm->iocount.rng++;
+		if (newctrl & ACM_CTRL_FRAMING)
 			acm->iocount.frame++;
-		if (difference & ACM_CTRL_PARITY)
+		if (newctrl & ACM_CTRL_PARITY)
 			acm->iocount.parity++;
-		if (difference & ACM_CTRL_OVERRUN)
+		if (newctrl & ACM_CTRL_OVERRUN)
 			acm->iocount.overrun++;
 		spin_unlock_irqrestore(&acm->read_lock, flags);
 
@@ -355,7 +355,6 @@ static void acm_ctrl_irq(struct urb *urb)
 	case -ENOENT:
 	case -ESHUTDOWN:
 		/* this urb is terminated, clean up */
-		acm->nb_index = 0;
 		dev_dbg(&acm->control->dev,
 			"%s - urb shutting down with status: %d\n",
 			__func__, status);
@@ -1642,6 +1641,7 @@ static int acm_pre_reset(struct usb_interface *intf)
 	struct acm *acm = usb_get_intfdata(intf);
 
 	clear_bit(EVENT_RX_STALL, &acm->flags);
+	acm->nb_index = 0; /* pending control transfers are lost */
 
 	return 0;
 }
@@ -1474,8 +1474,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
 	u = 0;
 	switch (uurb->type) {
 	case USBDEVFS_URB_TYPE_CONTROL:
-		if (is_in)
-			allow_short = true;
 		if (!usb_endpoint_xfer_control(&ep->desc))
 			return -EINVAL;
 		/* min 8 byte setup packet */
@@ -1505,6 +1503,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
 			is_in = 0;
 			uurb->endpoint &= ~USB_DIR_IN;
 		}
+		if (is_in)
+			allow_short = true;
 		snoop(&ps->dev->dev, "control urb: bRequestType=%02x "
 			"bRequest=%02x wValue=%04x "
 			"wIndex=%04x wLength=%04x\n",
@@ -221,6 +221,8 @@
 #include <linux/usb/gadget.h>
 #include <linux/usb/composite.h>
 
+#include <linux/nospec.h>
+
 #include "configfs.h"
 
 
@@ -3158,6 +3160,7 @@ static struct config_group *fsg_lun_make(struct config_group *group,
 	fsg_opts = to_fsg_opts(&group->cg_item);
 	if (num >= FSG_MAX_LUNS)
 		return ERR_PTR(-ERANGE);
+	num = array_index_nospec(num, FSG_MAX_LUNS);
 
 	mutex_lock(&fsg_opts->lock);
 	if (fsg_opts->refcnt || fsg_opts->common->luns[num]) {
@@ -179,10 +179,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
+		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)
 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
+	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
-	}
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
 	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
@@ -161,6 +161,8 @@ static int intel_xhci_usb_remove(struct platform_device *pdev)
 {
 	struct intel_xhci_usb_data *data = platform_get_drvdata(pdev);
 
+	pm_runtime_disable(&pdev->dev);
+
 	usb_role_switch_unregister(data->role_sw);
 	return 0;
 }
@@ -318,8 +318,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 	struct vhci_hcd *vhci_hcd;
 	struct vhci *vhci;
 	int retval = 0;
-	int rhport;
+	int rhport = -1;
 	unsigned long flags;
+	bool invalid_rhport = false;
 
 	u32 prev_port_status[VHCI_HC_PORTS];
 
@@ -334,9 +335,19 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 	usbip_dbg_vhci_rh("typeReq %x wValue %x wIndex %x\n", typeReq, wValue,
 			  wIndex);
 
-	if (wIndex > VHCI_HC_PORTS)
-		pr_err("invalid port number %d\n", wIndex);
-	rhport = wIndex - 1;
+	/*
+	 * wIndex can be 0 for some request types (typeReq). rhport is
+	 * in valid range when wIndex >= 1 and < VHCI_HC_PORTS.
+	 *
+	 * Reference port_status[] only with valid rhport when
+	 * invalid_rhport is false.
+	 */
+	if (wIndex < 1 || wIndex > VHCI_HC_PORTS) {
+		invalid_rhport = true;
+		if (wIndex > VHCI_HC_PORTS)
+			pr_err("invalid port number %d\n", wIndex);
+	} else
+		rhport = wIndex - 1;
 
 	vhci_hcd = hcd_to_vhci_hcd(hcd);
 	vhci = vhci_hcd->vhci;
@@ -345,8 +356,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 
 	/* store old status and compare now and old later */
 	if (usbip_dbg_flag_vhci_rh) {
-		memcpy(prev_port_status, vhci_hcd->port_status,
-			sizeof(prev_port_status));
+		if (!invalid_rhport)
+			memcpy(prev_port_status, vhci_hcd->port_status,
+				sizeof(prev_port_status));
 	}
 
 	switch (typeReq) {
@@ -354,8 +366,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 		usbip_dbg_vhci_rh(" ClearHubFeature\n");
 		break;
 	case ClearPortFeature:
-		if (rhport < 0)
+		if (invalid_rhport) {
+			pr_err("invalid port number %d\n", wIndex);
 			goto error;
+		}
 		switch (wValue) {
 		case USB_PORT_FEAT_SUSPEND:
 			if (hcd->speed == HCD_USB3) {
@@ -415,9 +429,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 		break;
 	case GetPortStatus:
 		usbip_dbg_vhci_rh(" GetPortStatus port %x\n", wIndex);
-		if (wIndex < 1) {
+		if (invalid_rhport) {
 			pr_err("invalid port number %d\n", wIndex);
 			retval = -EPIPE;
+			goto error;
 		}
 
 		/* we do not care about resume. */
@@ -513,16 +528,20 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 				goto error;
 			}
 
-			if (rhport < 0)
+			if (invalid_rhport) {
+				pr_err("invalid port number %d\n", wIndex);
 				goto error;
+			}
 
 			vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND;
 			break;
 		case USB_PORT_FEAT_POWER:
 			usbip_dbg_vhci_rh(
 				" SetPortFeature: USB_PORT_FEAT_POWER\n");
-			if (rhport < 0)
+			if (invalid_rhport) {
+				pr_err("invalid port number %d\n", wIndex);
 				goto error;
+			}
 			if (hcd->speed == HCD_USB3)
 				vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER;
 			else
@@ -531,8 +550,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 		case USB_PORT_FEAT_BH_PORT_RESET:
 			usbip_dbg_vhci_rh(
 				" SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n");
-			if (rhport < 0)
+			if (invalid_rhport) {
+				pr_err("invalid port number %d\n", wIndex);
 				goto error;
+			}
 			/* Applicable only for USB3.0 hub */
 			if (hcd->speed != HCD_USB3) {
 				pr_err("USB_PORT_FEAT_BH_PORT_RESET req not "
@@ -543,8 +564,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 		case USB_PORT_FEAT_RESET:
 			usbip_dbg_vhci_rh(
 				" SetPortFeature: USB_PORT_FEAT_RESET\n");
-			if (rhport < 0)
+			if (invalid_rhport) {
+				pr_err("invalid port number %d\n", wIndex);
 				goto error;
+			}
 			/* if it's already enabled, disable */
 			if (hcd->speed == HCD_USB3) {
 				vhci_hcd->port_status[rhport] = 0;
@@ -565,8 +588,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 		default:
 			usbip_dbg_vhci_rh(" SetPortFeature: default %d\n",
 					  wValue);
-			if (rhport < 0)
+			if (invalid_rhport) {
+				pr_err("invalid port number %d\n", wIndex);
 				goto error;
+			}
 			if (hcd->speed == HCD_USB3) {
 				if ((vhci_hcd->port_status[rhport] &
 				     USB_SS_PORT_STAT_POWER) != 0) {
@@ -608,7 +633,7 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 	if (usbip_dbg_flag_vhci_rh) {
 		pr_debug("port %d\n", rhport);
 		/* Only dump valid port status */
-		if (rhport >= 0) {
+		if (!invalid_rhport) {
 			dump_port_status_diff(prev_port_status[rhport],
 					      vhci_hcd->port_status[rhport],
 					      hcd->speed == HCD_USB3);
@@ -618,8 +643,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
 
 	spin_unlock_irqrestore(&vhci->lock, flags);
 
-	if ((vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0)
+	if (!invalid_rhport &&
+	    (vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0) {
 		usb_hcd_poll_rh_status(hcd);
+	}
 
 	return retval;
 }
@@ -690,8 +690,6 @@ static void afs_process_async_call(struct work_struct *work)
 	}
 
 	if (call->state == AFS_CALL_COMPLETE) {
-		call->reply[0] = NULL;
-
 		/* We have two refs to release - one from the alloc and one
 		 * queued with the work item - and we can't just deallocate the
 		 * call because the work item may be queued again.
@@ -199,11 +199,9 @@ static struct afs_server *afs_install_server(struct afs_net *net,
 
 	write_sequnlock(&net->fs_addr_lock);
-	ret = 0;
 	goto out;
-
 exists:
 	afs_get_server(server);
 out:
 	write_sequnlock(&net->fs_lock);
 	return server;
 }
@@ -343,7 +343,7 @@ static int cachefiles_bury_object(struct cachefiles_cache *cache,
 	trap = lock_rename(cache->graveyard, dir);
 
 	/* do some checks before getting the grave dentry */
-	if (rep->d_parent != dir) {
+	if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
 		/* the entry was probably culled when we dropped the parent dir
 		 * lock */
 		unlock_rename(cache->graveyard, dir);
@@ -70,20 +70,7 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
 }
 
 /*
- * initialise an cookie jar slab element prior to any use
- */
-void fscache_cookie_init_once(void *_cookie)
-{
-	struct fscache_cookie *cookie = _cookie;
-
-	memset(cookie, 0, sizeof(*cookie));
-	spin_lock_init(&cookie->lock);
-	spin_lock_init(&cookie->stores_lock);
-	INIT_HLIST_HEAD(&cookie->backing_objects);
-}
-
-/*
- * Set the index key in a cookie. The cookie struct has space for a 12-byte
+ * Set the index key in a cookie. The cookie struct has space for a 16-byte
  * key plus length and hash, but if that's not big enough, it's instead a
  * pointer to a buffer containing 3 bytes of hash, 1 byte of length and then
  * the key data.
@@ -93,20 +80,18 @@ static int fscache_set_key(struct fscache_cookie *cookie,
 {
 	unsigned long long h;
 	u32 *buf;
+	int bufs;
 	int i;
 
 	cookie->key_len = index_key_len;
+	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
 
 	if (index_key_len > sizeof(cookie->inline_key)) {
-		buf = kzalloc(index_key_len, GFP_KERNEL);
+		buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
 		cookie->key = buf;
 	} else {
 		buf = (u32 *)cookie->inline_key;
-		buf[0] = 0;
-		buf[1] = 0;
-		buf[2] = 0;
 	}
 
 	memcpy(buf, index_key, index_key_len);
@@ -116,7 +101,8 @@ static int fscache_set_key(struct fscache_cookie *cookie,
 	 */
 	h = (unsigned long)cookie->parent;
 	h += index_key_len + cookie->type;
-	for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++)
+
+	for (i = 0; i < bufs; i++)
 		h += buf[i];
 
 	cookie->key_hash = h ^ (h >> 32);
@@ -161,7 +147,7 @@ struct fscache_cookie *fscache_alloc_cookie(
 	struct fscache_cookie *cookie;
 
 	/* allocate and initialise a cookie */
-	cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL);
+	cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
 	if (!cookie)
 		return NULL;
 
@@ -192,6 +178,9 @@ struct fscache_cookie *fscache_alloc_cookie(
 	cookie->netfs_data = netfs_data;
 	cookie->flags = (1 << FSCACHE_COOKIE_NO_DATA_YET);
 	cookie->type = def->type;
+	spin_lock_init(&cookie->lock);
+	spin_lock_init(&cookie->stores_lock);
+	INIT_HLIST_HEAD(&cookie->backing_objects);
 
 	/* radix tree insertion won't use the preallocation pool unless it's
 	 * told it may not wait */
@@ -51,7 +51,6 @@ extern struct fscache_cache *fscache_select_cache_for_object(
 extern struct kmem_cache *fscache_cookie_jar;
 
 extern void fscache_free_cookie(struct fscache_cookie *);
-extern void fscache_cookie_init_once(void *);
 extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *,
 						   const struct fscache_cookie_def *,
 						   const void *, size_t,
@@ -143,9 +143,7 @@ static int __init fscache_init(void)
 
 	fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
 					       sizeof(struct fscache_cookie),
-					       0,
-					       0,
-					       fscache_cookie_init_once);
+					       0, 0, NULL);
 	if (!fscache_cookie_jar) {
 		pr_notice("Failed to allocate a cookie jar\n");
 		ret = -ENOMEM;
@@ -153,6 +153,17 @@ struct __drm_planes_state {
 struct __drm_crtcs_state {
 	struct drm_crtc *ptr;
 	struct drm_crtc_state *state, *old_state, *new_state;
+
+	/**
+	 * @commit:
+	 *
+	 * A reference to the CRTC commit object that is kept for use by
+	 * drm_atomic_helper_wait_for_flip_done() after
+	 * drm_atomic_helper_commit_hw_done() is called. This ensures that a
+	 * concurrent commit won't free a commit object that is still in use.
+	 */
+	struct drm_crtc_commit *commit;
+
 	s32 __user *out_fence_ptr;
 	u64 last_vblank_count;
 };
@@ -214,9 +214,9 @@ struct detailed_timing {
 #define DRM_EDID_HDMI_DC_Y444     (1 << 3)
 
 /* YCBCR 420 deep color modes */
-#define DRM_EDID_YCBCR420_DC_48   (1 << 6)
-#define DRM_EDID_YCBCR420_DC_36   (1 << 5)
-#define DRM_EDID_YCBCR420_DC_30   (1 << 4)
+#define DRM_EDID_YCBCR420_DC_48   (1 << 2)
+#define DRM_EDID_YCBCR420_DC_36   (1 << 1)
+#define DRM_EDID_YCBCR420_DC_30   (1 << 0)
 #define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \
 				   DRM_EDID_YCBCR420_DC_36 | \
 				   DRM_EDID_YCBCR420_DC_30)
@@ -43,7 +43,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned char *vec);
 extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 			 unsigned long new_addr, unsigned long old_end,
-			 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush);
+			 pmd_t *old_pmd, pmd_t *new_pmd);
 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, pgprot_t newprot,
 			int prot_numa);
@@ -1032,6 +1032,14 @@ static inline void *mlx5_frag_buf_get_wqe(struct mlx5_frag_buf_ctrl *fbc,
 		((fbc->frag_sz_m1 & ix) << fbc->log_stride);
 }
 
+static inline u32
+mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
+{
+	u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
+
+	return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
+}
+
 int mlx5_cmd_init(struct mlx5_core_dev *dev);
 void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
 void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
@@ -21,6 +21,7 @@
 #include <linux/rbtree_latch.h>
 #include <linux/error-injection.h>
 #include <linux/cfi.h>
+#include <linux/tracepoint-defs.h>
 
 #include <linux/percpu.h>
 #include <asm/module.h>
@@ -435,7 +436,7 @@ struct module {
 
 #ifdef CONFIG_TRACEPOINTS
 	unsigned int num_tracepoints;
-	struct tracepoint * const *tracepoints_ptrs;
+	tracepoint_ptr_t *tracepoints_ptrs;
 #endif
 #ifdef HAVE_JUMP_LABEL
 	struct jump_entry *jump_entries;
@@ -35,6 +35,12 @@ struct tracepoint {
 	struct tracepoint_func __rcu *funcs;
 };
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+typedef const int tracepoint_ptr_t;
+#else
+typedef struct tracepoint * const tracepoint_ptr_t;
+#endif
+
 struct bpf_raw_event_map {
 	struct tracepoint *tp;
 	void *bpf_func;
@@ -99,6 +99,29 @@ extern void syscall_unregfunc(void);
 #define TRACE_DEFINE_ENUM(x)
 #define TRACE_DEFINE_SIZEOF(x)
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+{
+	return offset_to_ptr(p);
+}
+
+#define __TRACEPOINT_ENTRY(name)					\
+	asm("	.section \"__tracepoints_ptrs\", \"a\"		\n"	\
+	    "	.balign 4					\n"	\
+	    "	.long	__tracepoint_" #name " - .		\n"	\
+	    "	.previous					\n")
+#else
+static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+{
+	return *p;
+}
+
+#define __TRACEPOINT_ENTRY(name)					 \
+	static tracepoint_ptr_t __tracepoint_ptr_##name __used		 \
+	__attribute__((section("__tracepoints_ptrs"))) =		 \
+		&__tracepoint_##name
+#endif
+
 #endif /* _LINUX_TRACEPOINT_H */
 
 /*
@@ -253,19 +276,6 @@ extern void syscall_unregfunc(void);
 		return static_key_false(&__tracepoint_##name.key);	\
 	}
 
-#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-#define __TRACEPOINT_ENTRY(name)					\
-	asm("	.section \"__tracepoints_ptrs\", \"a\"		\n"	\
-	    "	.balign 4					\n"	\
-	    "	.long	__tracepoint_" #name " - .		\n"	\
-	    "	.previous					\n")
-#else
-#define __TRACEPOINT_ENTRY(name)					 \
-	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
-	__attribute__((section("__tracepoints_ptrs"))) =		 \
-		&__tracepoint_##name
-#endif
-
 /*
  * We have no guarantee that gcc and the linker won't up-align the tracepoint
  * structures, so we create an array of pointers that will be used for iteration
@@ -527,4 +527,14 @@ static inline void skb_dst_update_pmtu(struct sk_buff *skb, u32 mtu)
 		dst->ops->update_pmtu(dst, NULL, skb, mtu);
 }
 
+static inline void skb_tunnel_check_pmtu(struct sk_buff *skb,
+					 struct dst_entry *encap_dst,
+					 int headroom)
+{
+	u32 encap_mtu = dst_mtu(encap_dst);
+
+	if (skb->len > encap_mtu - headroom)
+		skb_dst_update_pmtu(skb, encap_mtu - headroom);
+}
+
 #endif /* _NET_DST_H */
@@ -159,6 +159,10 @@ struct fib6_info {
 	struct rt6_info * __percpu *rt6i_pcpu;
 	struct rt6_exception_bucket __rcu *rt6i_exception_bucket;
 
+#ifdef CONFIG_IPV6_ROUTER_PREF
+	unsigned long last_probe;
+#endif
+
 	u32 fib6_metric;
 	u8 fib6_protocol;
 	u8 fib6_type;
@@ -347,7 +347,7 @@ static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
 	__u16 size;
 
 	size = ntohs(chunk->chunk_hdr->length);
-	size -= sctp_datahdr_len(&chunk->asoc->stream);
+	size -= sctp_datachk_len(&chunk->asoc->stream);
 
 	return size;
 }
@@ -876,6 +876,8 @@ struct sctp_transport {
 	unsigned long sackdelay;
 	__u32 sackfreq;
 
+	atomic_t mtu_info;
+
 	/* When was the last time that we heard from this transport? We use
 	 * this to pick new active and retran paths.
 	 */
@@ -301,6 +301,7 @@ enum sctp_sinfo_flags {
 	SCTP_SACK_IMMEDIATELY = (1 << 3), /* SACK should be sent without delay. */
 	/* 2 bits here have been used by SCTP_PR_SCTP_MASK */
 	SCTP_SENDALL = (1 << 6),
+	SCTP_PR_SCTP_ALL = (1 << 7),
 	SCTP_NOTIFICATION = MSG_NOTIFICATION, /* Next message is not user msg but notification. */
 	SCTP_EOF = MSG_FIN, /* Initiate graceful shutdown process. */
 };
@@ -192,11 +192,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
 	sock_hold(sock->sk);
 
 	old_xs = xchg(&m->xsk_map[i], xs);
-	if (old_xs) {
-		/* Make sure we've flushed everything. */
-		synchronize_net();
+	if (old_xs)
 		sock_put((struct sock *)old_xs);
-	}
 
 	sockfd_put(sock);
 	return 0;
@@ -212,11 +209,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
 		return -EINVAL;
 
 	old_xs = xchg(&m->xsk_map[k], NULL);
-	if (old_xs) {
-		/* Make sure we've flushed everything. */
-		synchronize_net();
+	if (old_xs)
 		sock_put((struct sock *)old_xs);
-	}
 
 	return 0;
 }
@@ -4002,7 +4002,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * put back on, and if we advance min_vruntime, we'll be placed back
 	 * further than we started -- ie. we'll be penalized.
 	 */
-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
+	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
 		update_min_vruntime(cfs_rq);
 }
 
@@ -4477,9 +4477,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
 
 	/*
 	 * Add to the _head_ of the list, so that an already-started
-	 * distribute_cfs_runtime will not see us
+	 * distribute_cfs_runtime will not see us. If disribute_cfs_runtime is
+	 * not running add to the tail so that later runqueues don't get starved.
 	 */
-	list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
+	if (cfs_b->distribute_running)
+		list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
+	else
+		list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
 
 	/*
 	 * If we're the first throttled task, make sure the bandwidth
@@ -4623,14 +4627,16 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
 	 * in us over-using our runtime if it is all used during this loop, but
 	 * only by limited amounts in that extreme case.
 	 */
-	while (throttled && cfs_b->runtime > 0) {
+	while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) {
 		runtime = cfs_b->runtime;
+		cfs_b->distribute_running = 1;
 		raw_spin_unlock(&cfs_b->lock);
 		/* we can't nest cfs_b->lock while distributing bandwidth */
 		runtime = distribute_cfs_runtime(cfs_b, runtime,
 						 runtime_expires);
 		raw_spin_lock(&cfs_b->lock);
 
+		cfs_b->distribute_running = 0;
 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
 
 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
@@ -4741,6 +4747,11 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 
 	/* confirm we're still not at a refresh boundary */
 	raw_spin_lock(&cfs_b->lock);
+	if (cfs_b->distribute_running) {
+		raw_spin_unlock(&cfs_b->lock);
+		return;
+	}
+
 	if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
 		raw_spin_unlock(&cfs_b->lock);
 		return;
@@ -4750,6 +4761,9 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 		runtime = cfs_b->runtime;
 
 	expires = cfs_b->runtime_expires;
+	if (runtime)
+		cfs_b->distribute_running = 1;
+
 	raw_spin_unlock(&cfs_b->lock);
 
 	if (!runtime)
@@ -4760,6 +4774,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 	raw_spin_lock(&cfs_b->lock);
 	if (expires == cfs_b->runtime_expires)
 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+	cfs_b->distribute_running = 0;
 	raw_spin_unlock(&cfs_b->lock);
 }
 
@@ -4868,6 +4883,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	cfs_b->period_timer.function = sched_cfs_period_timer;
 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
+	cfs_b->distribute_running = 0;
 }
 
 static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
@@ -346,6 +346,8 @@ struct cfs_bandwidth {
 	int nr_periods;
 	int nr_throttled;
 	u64 throttled_time;
+
+	bool distribute_running;
 #endif
 };
 
@@ -5,12 +5,12 @@
  * Copyright (C) 2018 Joel Fernandes (Google) <joel@joelfernandes.org>
  */
 
+#include <linux/trace_clock.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
 #include <linux/kthread.h>
-#include <linux/ktime.h>
 #include <linux/module.h>
 #include <linux/printk.h>
 #include <linux/string.h>
@@ -25,13 +25,13 @@ MODULE_PARM_DESC(test_mode, "Mode of the test such as preempt or irq (default ir
 
 static void busy_wait(ulong time)
 {
-	ktime_t start, end;
-	start = ktime_get();
+	u64 start, end;
+	start = trace_clock_local();
 	do {
-		end = ktime_get();
+		end = trace_clock_local();
 		if (kthread_should_stop())
 			break;
-	} while (ktime_to_ns(ktime_sub(end, start)) < (time * 1000));
+	} while ((end - start) < (time * 1000));
 }
 
 static int preemptirq_delay_run(void *data)
@@ -738,16 +738,30 @@ static void free_synth_field(struct synth_field *field)
 	kfree(field);
 }
 
-static struct synth_field *parse_synth_field(char *field_type,
-					     char *field_name)
+static struct synth_field *parse_synth_field(int argc, char **argv,
+					     int *consumed)
 {
 	struct synth_field *field;
+	const char *prefix = NULL;
+	char *field_type = argv[0], *field_name;
 	int len, ret = 0;
 	char *array;
 
 	if (field_type[0] == ';')
 		field_type++;
 
+	if (!strcmp(field_type, "unsigned")) {
+		if (argc < 3)
+			return ERR_PTR(-EINVAL);
+		prefix = "unsigned ";
+		field_type = argv[1];
+		field_name = argv[2];
+		*consumed = 3;
+	} else {
+		field_name = argv[1];
+		*consumed = 2;
+	}
+
 	len = strlen(field_name);
 	if (field_name[len - 1] == ';')
 		field_name[len - 1] = '\0';
@@ -760,11 +774,15 @@ static struct synth_field *parse_synth_field(char *field_type,
 	array = strchr(field_name, '[');
 	if (array)
 		len += strlen(array);
+	if (prefix)
+		len += strlen(prefix);
 	field->type = kzalloc(len, GFP_KERNEL);
 	if (!field->type) {
 		ret = -ENOMEM;
 		goto free;
 	}
+	if (prefix)
+		strcat(field->type, prefix);
 	strcat(field->type, field_type);
 	if (array) {
 		strcat(field->type, array);
@@ -1009,7 +1027,7 @@ static int create_synth_event(int argc, char **argv)
 	struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
 	struct synth_event *event = NULL;
 	bool delete_event = false;
-	int i, n_fields = 0, ret = 0;
+	int i, consumed = 0, n_fields = 0, ret = 0;
 	char *name;
 
 	mutex_lock(&synth_event_mutex);
@@ -1061,16 +1079,16 @@ static int create_synth_event(int argc, char **argv)
 			goto err;
 		}
 
-		field = parse_synth_field(argv[i], argv[i + 1]);
+		field = parse_synth_field(argc - i, &argv[i], &consumed);
 		if (IS_ERR(field)) {
 			ret = PTR_ERR(field);
 			goto err;
 		}
-		fields[n_fields] = field;
-		i++; n_fields++;
+		fields[n_fields++] = field;
+		i += consumed - 1;
 	}
 
-	if (i < argc) {
+	if (i < argc && strcmp(argv[i], ";") != 0) {
 		ret = -EINVAL;
 		goto err;
 	}
@@ -28,8 +28,8 @@
 #include <linux/sched/task.h>
 #include <linux/static_key.h>
 
-extern struct tracepoint * const __start___tracepoints_ptrs[];
-extern struct tracepoint * const __stop___tracepoints_ptrs[];
+extern tracepoint_ptr_t __start___tracepoints_ptrs[];
+extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
 
 DEFINE_SRCU(tracepoint_srcu);
 EXPORT_SYMBOL_GPL(tracepoint_srcu);
@@ -371,25 +371,17 @@ int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
 }
 EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
 
-static void for_each_tracepoint_range(struct tracepoint * const *begin,
-		struct tracepoint * const *end,
+static void for_each_tracepoint_range(
+		tracepoint_ptr_t *begin, tracepoint_ptr_t *end,
 		void (*fct)(struct tracepoint *tp, void *priv),
 		void *priv)
 {
+	tracepoint_ptr_t *iter;
+
 	if (!begin)
 		return;
-
-	if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) {
-		const int *iter;
-
-		for (iter = (const int *)begin; iter < (const int *)end; iter++)
-			fct(offset_to_ptr(iter), priv);
-	} else {
-		struct tracepoint * const *iter;
-
-		for (iter = begin; iter < end; iter++)
-			fct(*iter, priv);
-	}
+	for (iter = begin; iter < end; iter++)
+		fct(tracepoint_ptr_deref(iter), priv);
 }
 
 #ifdef CONFIG_MODULES
@@ -150,10 +150,10 @@ static void ida_check_conv(struct ida *ida)
 	IDA_BUG_ON(ida, !ida_is_empty(ida));
 }
 
+static DEFINE_IDA(ida);
+
 static int ida_checks(void)
 {
-	DEFINE_IDA(ida);
-
 	IDA_BUG_ON(&ida, !ida_is_empty(&ida));
 	ida_check_alloc(&ida);
 	ida_check_destroy(&ida);
@@ -1780,7 +1780,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		  unsigned long new_addr, unsigned long old_end,
-		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
+		  pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
 	pmd_t pmd;
@@ -1811,7 +1811,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	if (new_ptl != old_ptl)
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
 	pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
-	if (pmd_present(pmd) && pmd_dirty(pmd))
+	if (pmd_present(pmd))
 		force_flush = true;
 	VM_BUG_ON(!pmd_none(*new_pmd));
 
@@ -1822,12 +1822,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		}
 		pmd = move_soft_dirty_pmd(pmd);
 		set_pmd_at(mm, new_addr, new_pmd, pmd);
-		if (new_ptl != old_ptl)
-			spin_unlock(new_ptl);
 		if (force_flush)
 			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
-		else
-			*need_flush = true;
+		if (new_ptl != old_ptl)
+			spin_unlock(new_ptl);
 		spin_unlock(old_ptl);
 		return true;
 	}
 mm/mremap.c | 30
@@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		unsigned long old_addr, unsigned long old_end,
 		struct vm_area_struct *new_vma, pmd_t *new_pmd,
-		unsigned long new_addr, bool need_rmap_locks, bool *need_flush)
+		unsigned long new_addr, bool need_rmap_locks)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pte_t *old_pte, *new_pte, pte;
@@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 
 		pte = ptep_get_and_clear(mm, old_addr, old_pte);
 		/*
-		 * If we are remapping a dirty PTE, make sure
+		 * If we are remapping a valid PTE, make sure
 		 * to flush TLB before we drop the PTL for the
-		 * old PTE or we may race with page_mkclean().
+		 * PTE.
 		 *
-		 * This check has to be done after we removed the
-		 * old PTE from page tables or another thread may
-		 * dirty it after the check and before the removal.
+		 * NOTE! Both old and new PTL matter: the old one
+		 * for racing with page_mkclean(), the new one to
+		 * make sure the physical page stays valid until
+		 * the TLB entry for the old mapping has been
+		 * flushed.
 		 */
-		if (pte_present(pte) && pte_dirty(pte))
+		if (pte_present(pte))
 			force_flush = true;
 		pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
 		pte = move_soft_dirty_pte(pte);
@@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 	}
 
 	arch_leave_lazy_mmu_mode();
+	if (force_flush)
+		flush_tlb_range(vma, old_end - len, old_end);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	pte_unmap(new_pte - 1);
-	if (force_flush)
-		flush_tlb_range(vma, old_end - len, old_end);
-	else
-		*need_flush = true;
 	pte_unmap_unlock(old_pte - 1, old_ptl);
 	if (need_rmap_locks)
 		drop_rmap_locks(vma);
@@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 {
 	unsigned long extent, next, old_end;
 	pmd_t *old_pmd, *new_pmd;
-	bool need_flush = false;
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;		/* For mmu_notifiers */
 
@@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 				if (need_rmap_locks)
 					take_rmap_locks(vma);
 				moved = move_huge_pmd(vma, old_addr, new_addr,
-						    old_end, old_pmd, new_pmd,
-						    &need_flush);
+						    old_end, old_pmd, new_pmd);
 				if (need_rmap_locks)
 					drop_rmap_locks(vma);
 				if (moved)
@@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (extent > next - new_addr)
 			extent = next - new_addr;
 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
-			  new_pmd, new_addr, need_rmap_locks, &need_flush);
+			  new_pmd, new_addr, need_rmap_locks);
 	}
-	if (need_flush)
-		flush_tlb_range(vma, old_end-len, old_addr);
 
 	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
 
--- a/kernel/umh.c
+++ b/kernel/umh.c
@@ -23,9 +23,11 @@ static void shutdown_umh(struct umh_info *info)

	if (!info->pid)
		return;
-	tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID);
-	if (tsk)
+	tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);
+	if (tsk) {
		force_sig(SIGKILL, tsk);
+		put_task_struct(tsk);
+	}
	fput(info->pipe_to_umh);
	fput(info->pipe_from_umh);
	info->pid = 0;

--- a/net/core/ethtool.c
+++ b/net/core/ethtool.c
@@ -1015,6 +1015,9 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
			return -EINVAL;
	}

+	if (info.cmd != cmd)
+		return -EINVAL;
+
	if (info.cmd == ETHTOOL_GRXCLSRLALL) {
		if (info.rule_cnt > 0) {
			if (info.rule_cnt <= KMALLOC_MAX_SIZE / sizeof(u32))
@@ -2469,13 +2472,17 @@ static int ethtool_set_per_queue_coalesce(struct net_device *dev,
	return ret;
}

-static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr)
+static int ethtool_set_per_queue(struct net_device *dev,
+				 void __user *useraddr, u32 sub_cmd)
{
	struct ethtool_per_queue_op per_queue_opt;

	if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
		return -EFAULT;

+	if (per_queue_opt.sub_command != sub_cmd)
+		return -EINVAL;
+
	switch (per_queue_opt.sub_command) {
	case ETHTOOL_GCOALESCE:
		return ethtool_get_per_queue_coalesce(dev, useraddr, &per_queue_opt);
@@ -2846,7 +2853,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
		rc = ethtool_get_phy_stats(dev, useraddr);
		break;
	case ETHTOOL_PERQUEUE:
-		rc = ethtool_set_per_queue(dev, useraddr);
+		rc = ethtool_set_per_queue(dev, useraddr, sub_cmd);
		break;
	case ETHTOOL_GLINKSETTINGS:
		rc = ethtool_get_link_ksettings(dev, useraddr);