Commit a0e91c4a authored by Stephen Rothwell

Merge remote-tracking branch 'icc/icc-next'

parents ec6724f3 714e53a7
Interconnect Provider Device Tree Bindings
==========================================
The purpose of this document is to define a common set of generic interconnect
provider/consumer properties.
= interconnect providers =
The interconnect provider binding is intended to represent the interconnect
controllers in the system. Each provider registers a set of interconnect
nodes, which expose the interconnect related capabilities of the interconnect
to consumer drivers. These capabilities can be throughput, latency, priority
etc. The consumer drivers set constraints on interconnect paths (or endpoints)
depending on the use case. Interconnect providers can also be interconnect
consumers, such as in the case where two network-on-chip fabrics interface
directly.
Required properties:
- compatible : contains the interconnect provider compatible string
- #interconnect-cells : number of cells in an interconnect specifier needed to
		encode the interconnect node id
Example:
		snoc: interconnect@580000 {
			compatible = "qcom,msm8916-snoc";
			#interconnect-cells = <1>;
			reg = <0x580000 0x14000>;
			clock-names = "bus_clk", "bus_a_clk";
			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
		};
= interconnect consumers =
The interconnect consumers are device nodes which dynamically express their
bandwidth requirements along interconnect paths they are connected to. There
can be multiple interconnect providers on a SoC and the consumer may consume
multiple paths from different providers depending on use case and the
components it has to interact with.
Required properties:
interconnects : Pairs of phandles and interconnect provider specifiers to denote
		the edge source and destination ports of the interconnect path.
Optional properties:
interconnect-names : List of interconnect path name strings sorted in the same
		order as the interconnects property. Consumer drivers will use
		interconnect-names to match interconnect paths with interconnect
		specifier pairs.
Example:
	sdhci@7864000 {
		...
		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
		interconnect-names = "sdhc-mem";
	};
Qualcomm SDM845 Network-On-Chip interconnect driver binding
-----------------------------------------------------------
SDM845 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must reside within
an RPMh device node pertaining to their RSC and each provider maps to a single
RPMh resource.
Required properties :
- compatible : shall contain only one of the following:
"qcom,sdm845-rsc-hlos"
- #interconnect-cells : should contain 1
Examples:
apps_rsc: rsc {
	rsc_hlos: interconnect {
		compatible = "qcom,sdm845-rsc-hlos";
		#interconnect-cells = <1>;
	};
};
.. SPDX-License-Identifier: GPL-2.0
=====================================
GENERIC SYSTEM INTERCONNECT SUBSYSTEM
=====================================
Introduction
------------
This framework is designed to provide a standard kernel interface to control
the settings of the interconnects on an SoC. These settings can be throughput,
latency and priority between multiple interconnected devices or functional
blocks. This can be controlled dynamically in order to save power or provide
maximum performance.
The interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple
interconnects on an SoC, and they can be multi-tiered.
Below is a simplified diagram of a real-world SoC interconnect bus topology.
::
+----------------+ +----------------+
| HW Accelerator |--->| M NoC |<---------------+
+----------------+ +----------------+ |
| | +------------+
+-----+ +-------------+ V +------+ | |
| DDR | | +--------+ | PCIe | | |
+-----+ | | Slaves | +------+ | |
^ ^ | +--------+ | | C NoC |
| | V V | |
+------------------+ +------------------------+ | | +-----+
| |-->| |-->| |-->| CPU |
| |-->| |<--| | +-----+
| Mem NoC | | S NoC | +------------+
| |<--| |---------+ |
| |<--| |<------+ | | +--------+
+------------------+ +------------------------+ | | +-->| Slaves |
^ ^ ^ ^ ^ | | +--------+
| | | | | | V
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
| CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
|
+-------+
| Modem |
+-------+
Terminology
-----------
Interconnect provider is the software definition of the interconnect hardware.
The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
and Mem NoC.
Interconnect node is the software definition of the interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components including other interconnect
providers. The point on the diagram where the CPUs connect to the memory is
called an interconnect node, which belongs to the Mem NoC interconnect provider.
Interconnect endpoints are the first or the last element of the path. Every
endpoint is a node, but not every node is an endpoint.
Interconnect path is everything between two endpoints, including all the nodes
that have to be traversed to get from a source to a destination node. It may
include multiple master-slave pairs across several interconnect providers.
Interconnect consumers are the entities which make use of the data paths exposed
by the providers. The consumers send requests to providers, asking for various
throughput, latency and priority levels. Usually the consumers are device drivers
that send requests based on their needs. An example of a consumer is a video
decoder that supports various formats and image sizes.
Interconnect providers
----------------------
An interconnect provider is an entity that implements methods to initialize and
configure the interconnect bus hardware. Interconnect provider drivers should
be registered with the interconnect provider core.
.. kernel-doc:: include/linux/interconnect-provider.h
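
The exact registration sequence is driver specific, but a minimal sketch of a
hypothetical provider (the ``foo_*`` names and node IDs below are made up for
illustration, and error handling is trimmed) might look roughly like this::

	#include <linux/device.h>
	#include <linux/interconnect-provider.h>
	#include <linux/platform_device.h>
	#include <linux/slab.h>

	/* hypothetical node IDs, normally taken from a dt-bindings header */
	#define FOO_MASTER_CPU	0
	#define FOO_SLAVE_DDR	1

	static int foo_set(struct icc_node *src, struct icc_node *dst)
	{
		/* program the hardware with dst->avg_bw / dst->peak_bw here */
		return 0;
	}

	static int foo_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
				 u32 *agg_avg, u32 *agg_peak)
	{
		*agg_avg += avg_bw;		/* sum the average bandwidth requests */
		if (peak_bw > *agg_peak)	/* keep the largest peak request */
			*agg_peak = peak_bw;
		return 0;
	}

	static int foo_icc_probe(struct platform_device *pdev)
	{
		struct icc_provider *provider;
		struct icc_node *cpu, *ddr;
		int ret;

		provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
		if (!provider)
			return -ENOMEM;

		provider->dev = &pdev->dev;
		provider->set = foo_set;
		provider->aggregate = foo_aggregate;
		/* ->xlate and ->data would also be filled in for DT lookups */

		ret = icc_provider_add(provider);
		if (ret)
			return ret;

		cpu = icc_node_create(FOO_MASTER_CPU);
		ddr = icc_node_create(FOO_SLAVE_DDR);
		icc_node_add(cpu, provider);
		icc_node_add(ddr, provider);
		icc_link_create(cpu, FOO_SLAVE_DDR);

		return 0;
	}

A real provider additionally describes the full NoC topology and fills in the
->xlate callback so that consumer phandle arguments can be mapped to nodes.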
Interconnect consumers
----------------------
Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths.
.. kernel-doc:: include/linux/interconnect.h
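
For a consumer such as the sdhci node in the binding example above, the flow is
to request a path, set bandwidth on it, and release it when done. Below is a
minimal sketch; the helper name and the bandwidth values are made up for
illustration::

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>

	/* hypothetical helper called from a driver's probe or runtime PM path */
	static int foo_request_bus_bandwidth(struct device *dev)
	{
		struct icc_path *path;
		int ret;

		/* look up the "sdhc-mem" path from the interconnects DT property */
		path = of_icc_get(dev, "sdhc-mem");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* ask for 100 MB/s average and 200 MB/s peak bandwidth */
		ret = icc_set_bw(path, MBps_to_icc(100), MBps_to_icc(200));
		if (ret) {
			icc_put(path);
			return ret;
		}

		/* ... perform transfers ... */

		/* drop the bandwidth request and release the path */
		icc_set_bw(path, 0, 0);
		icc_put(path);

		return 0;
	}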
@@ -7921,6 +7921,16 @@ L: linux-gpio@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-intel-mid.c
INTERCONNECT API
M: Georgi Djakov <georgi.djakov@linaro.org>
S: Maintained
F: Documentation/interconnect/
F: Documentation/devicetree/bindings/interconnect/
F: drivers/interconnect/
F: include/dt-bindings/interconnect/
F: include/linux/interconnect-provider.h
F: include/linux/interconnect.h
INVENSENSE MPU-3050 GYROSCOPE DRIVER
M: Linus Walleij <linus.walleij@linaro.org>
L: linux-iio@vger.kernel.org
@@ -8,6 +8,7 @@
#include <dt-bindings/clock/qcom,dispcc-sdm845.h>
#include <dt-bindings/clock/qcom,gcc-sdm845.h>
#include <dt-bindings/clock/qcom,rpmh.h>
#include <dt-bindings/interconnect/qcom,sdm845.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/phy/phy-qcom-qusb2.h>
#include <dt-bindings/reset/qcom,sdm845-aoss.h>
@@ -1765,6 +1766,11 @@
compatible = "qcom,sdm845-rpmh-clk";
#clock-cells = <1>;
};
rsc_hlos: interconnect {
	compatible = "qcom,sdm845-rsc-hlos";
	#interconnect-cells = <1>;
};
};
intc: interrupt-controller@17a00000 {
@@ -228,4 +228,6 @@ source "drivers/siox/Kconfig"
source "drivers/slimbus/Kconfig"
source "drivers/interconnect/Kconfig"
endmenu
@@ -186,3 +186,4 @@ obj-$(CONFIG_MULTIPLEXER) += mux/
obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/
obj-$(CONFIG_SIOX) += siox/
obj-$(CONFIG_GNSS) += gnss/
obj-$(CONFIG_INTERCONNECT) += interconnect/
menuconfig INTERCONNECT
	tristate "On-Chip Interconnect management support"
	help
	  Support for management of the on-chip interconnects.

	  This framework is designed to provide a generic interface for
	  managing the interconnects in an SoC.

	  If unsure, say no.

if INTERCONNECT

source "drivers/interconnect/qcom/Kconfig"

endif
# SPDX-License-Identifier: GPL-2.0
icc-core-objs := core.o
obj-$(CONFIG_INTERCONNECT) += icc-core.o
obj-$(CONFIG_INTERCONNECT_QCOM) += qcom/
config INTERCONNECT_QCOM
	bool "Qualcomm Network-on-Chip interconnect drivers"
	depends on ARCH_QCOM
	help
	  Support for Qualcomm's Network-on-Chip interconnect hardware.

config INTERCONNECT_QCOM_SDM845
	tristate "Qualcomm SDM845 interconnect driver"
	depends on INTERCONNECT_QCOM
	depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
	help
	  This is a driver for the Qualcomm Network-on-Chip on sdm845-based
	  platforms.
# SPDX-License-Identifier: GPL-2.0
qnoc-sdm845-objs := sdm845.o
obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Qualcomm SDM845 interconnect IDs
*
* Copyright (c) 2018, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_SDM845_H
#define __DT_BINDINGS_INTERCONNECT_QCOM_SDM845_H
#define MASTER_A1NOC_CFG 0
#define MASTER_BLSP_1 1
#define MASTER_TSIF 2
#define MASTER_SDCC_2 3
#define MASTER_SDCC_4 4
#define MASTER_UFS_CARD 5
#define MASTER_UFS_MEM 6
#define MASTER_PCIE_0 7
#define MASTER_A2NOC_CFG 8
#define MASTER_QDSS_BAM 9
#define MASTER_BLSP_2 10
#define MASTER_CNOC_A2NOC 11
#define MASTER_CRYPTO 12
#define MASTER_IPA 13
#define MASTER_PCIE_1 14
#define MASTER_QDSS_ETR 15
#define MASTER_USB3_0 16
#define MASTER_USB3_1 17
#define MASTER_CAMNOC_HF0_UNCOMP 18
#define MASTER_CAMNOC_HF1_UNCOMP 19
#define MASTER_CAMNOC_SF_UNCOMP 20
#define MASTER_SPDM 21
#define MASTER_TIC 22
#define MASTER_SNOC_CNOC 23
#define MASTER_QDSS_DAP 24
#define MASTER_CNOC_DC_NOC 25
#define MASTER_APPSS_PROC 26
#define MASTER_GNOC_CFG 27
#define MASTER_LLCC 28
#define MASTER_TCU_0 29
#define MASTER_MEM_NOC_CFG 30
#define MASTER_GNOC_MEM_NOC 31
#define MASTER_MNOC_HF_MEM_NOC 32
#define MASTER_MNOC_SF_MEM_NOC 33
#define MASTER_SNOC_GC_MEM_NOC 34
#define MASTER_SNOC_SF_MEM_NOC 35
#define MASTER_GFX3D 36
#define MASTER_CNOC_MNOC_CFG 37
#define MASTER_CAMNOC_HF0 38
#define MASTER_CAMNOC_HF1 39
#define MASTER_CAMNOC_SF 40
#define MASTER_MDP0 41
#define MASTER_MDP1 42
#define MASTER_ROTATOR 43
#define MASTER_VIDEO_P0 44
#define MASTER_VIDEO_P1 45
#define MASTER_VIDEO_PROC 46
#define MASTER_SNOC_CFG 47
#define MASTER_A1NOC_SNOC 48
#define MASTER_A2NOC_SNOC 49
#define MASTER_GNOC_SNOC 50
#define MASTER_MEM_NOC_SNOC 51
#define MASTER_ANOC_PCIE_SNOC 52
#define MASTER_PIMEM 53
#define MASTER_GIC 54
#define SLAVE_A1NOC_SNOC 55
#define SLAVE_SERVICE_A1NOC 56
#define SLAVE_ANOC_PCIE_A1NOC_SNOC 57
#define SLAVE_A2NOC_SNOC 58
#define SLAVE_ANOC_PCIE_SNOC 59
#define SLAVE_SERVICE_A2NOC 60
#define SLAVE_CAMNOC_UNCOMP 61
#define SLAVE_A1NOC_CFG 62
#define SLAVE_A2NOC_CFG 63
#define SLAVE_AOP 64
#define SLAVE_AOSS 65
#define SLAVE_CAMERA_CFG 66
#define SLAVE_CLK_CTL 67
#define SLAVE_CDSP_CFG 68
#define SLAVE_RBCPR_CX_CFG 69
#define SLAVE_CRYPTO_0_CFG 70
#define SLAVE_DCC_CFG 71
#define SLAVE_CNOC_DDRSS 72
#define SLAVE_DISPLAY_CFG 73
#define SLAVE_GLM 74
#define SLAVE_GFX3D_CFG 75
#define SLAVE_IMEM_CFG 76
#define SLAVE_IPA_CFG 77
#define SLAVE_CNOC_MNOC_CFG 78
#define SLAVE_PCIE_0_CFG 79
#define SLAVE_PCIE_1_CFG 80
#define SLAVE_PDM 81
#define SLAVE_SOUTH_PHY_CFG 82
#define SLAVE_PIMEM_CFG 83
#define SLAVE_PRNG 84
#define SLAVE_QDSS_CFG 85
#define SLAVE_BLSP_2 86
#define SLAVE_BLSP_1 87
#define SLAVE_SDCC_2 88
#define SLAVE_SDCC_4 89
#define SLAVE_SNOC_CFG 90
#define SLAVE_SPDM_WRAPPER 91
#define SLAVE_SPSS_CFG 92
#define SLAVE_TCSR 93
#define SLAVE_TLMM_NORTH 94
#define SLAVE_TLMM_SOUTH 95
#define SLAVE_TSIF 96
#define SLAVE_UFS_CARD_CFG 97
#define SLAVE_UFS_MEM_CFG 98
#define SLAVE_USB3_0 99
#define SLAVE_USB3_1 100
#define SLAVE_VENUS_CFG 101
#define SLAVE_VSENSE_CTRL_CFG 102
#define SLAVE_CNOC_A2NOC 103
#define SLAVE_SERVICE_CNOC 104
#define SLAVE_LLCC_CFG 105
#define SLAVE_MEM_NOC_CFG 106
#define SLAVE_GNOC_SNOC 107
#define SLAVE_GNOC_MEM_NOC 108
#define SLAVE_SERVICE_GNOC 109
#define SLAVE_EBI1 110
#define SLAVE_MSS_PROC_MS_MPU_CFG 111
#define SLAVE_MEM_NOC_GNOC 112
#define SLAVE_LLCC 113
#define SLAVE_MEM_NOC_SNOC 114
#define SLAVE_SERVICE_MEM_NOC 115
#define SLAVE_MNOC_SF_MEM_NOC 116
#define SLAVE_MNOC_HF_MEM_NOC 117
#define SLAVE_SERVICE_MNOC 118
#define SLAVE_APPSS 119
#define SLAVE_SNOC_CNOC 120
#define SLAVE_SNOC_MEM_NOC_GC 121
#define SLAVE_SNOC_MEM_NOC_SF 122
#define SLAVE_IMEM 123
#define SLAVE_PCIE_0 124
#define SLAVE_PCIE_1 125
#define SLAVE_PIMEM 126
#define SLAVE_SERVICE_SNOC 127
#define SLAVE_QDSS_STM 128
#define SLAVE_TCU 129
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#ifndef __LINUX_INTERCONNECT_PROVIDER_H
#define __LINUX_INTERCONNECT_PROVIDER_H
#include <linux/interconnect.h>
#define icc_units_to_bps(bw) ((bw) * 1000ULL)
struct icc_node;
struct of_phandle_args;
/**
* struct icc_onecell_data - driver data for onecell interconnect providers
*
* @num_nodes: number of nodes in this device
* @nodes: array of pointers to the nodes in this device
*/
struct icc_onecell_data {
	unsigned int num_nodes;
	struct icc_node *nodes[];
};

struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
				      void *data);
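
/*
 * Illustrative note: a provider whose DT specifier is a single node index can
 * point its ->xlate callback at of_icc_xlate_onecell() and its ->data at a
 * struct icc_onecell_data whose nodes[] array is indexed by that cell value.
 */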
/**
* struct icc_provider - interconnect provider (controller) entity that might
* provide multiple interconnect controls
*
* @provider_list: list of the registered interconnect providers
* @nodes: internal list of the interconnect provider nodes
* @set: pointer to device specific set operation function
* @aggregate: pointer to device specific aggregate operation function
* @xlate: provider-specific callback for mapping nodes from phandle arguments
* @dev: the device this interconnect provider belongs to
* @users: count of active users
* @data: pointer to private data
*/
struct icc_provider {
	struct list_head	provider_list;
	struct list_head	nodes;
	int (*set)(struct icc_node *src, struct icc_node *dst);
	int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			 u32 *agg_avg, u32 *agg_peak);
	struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data);
	struct device		*dev;
	int			users;
	void			*data;
};
/**
* struct icc_node - entity that is part of the interconnect topology
*
* @id: platform specific node id
* @name: node name used in debugfs
* @links: a list of targets pointing to where we can go next when traversing
* @num_links: number of links to other interconnect nodes
* @provider: points to the interconnect provider of this node
* @node_list: the list entry in the parent provider's "nodes" list
* @search_list: list used when walking the nodes graph
* @reverse: pointer to previous node when walking the nodes graph
* @is_traversed: flag that is used when walking the nodes graph
* @req_list: a list of QoS constraint requests associated with this node
* @avg_bw: aggregated value of average bandwidth requests from all consumers
* @peak_bw: aggregated value of peak bandwidth requests from all consumers
* @data: pointer to private data
*/
struct icc_node {
	int			id;
	const char		*name;
	struct icc_node		**links;
	size_t			num_links;
	struct icc_provider	*provider;
	struct list_head	node_list;
	struct list_head	search_list;
	struct icc_node		*reverse;
	u8			is_traversed:1;
	struct hlist_head	req_list;
	u32			avg_bw;
	u32			peak_bw;
	void			*data;
};
#if IS_ENABLED(CONFIG_INTERCONNECT)
struct icc_node *icc_node_create(int id);
void icc_node_destroy(int id);
int icc_link_create(struct icc_node *node, const int dst_id);
int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
void icc_node_add(struct icc_node *node, struct icc_provider *provider);
void icc_node_del(struct icc_node *node);
int icc_provider_add(struct icc_provider *provider);
int icc_provider_del(struct icc_provider *provider);
#else
static inline struct icc_node *icc_node_create(int id)
{
	return ERR_PTR(-ENOTSUPP);
}

static inline void icc_node_destroy(int id)
{
}

static inline int icc_link_create(struct icc_node *node, const int dst_id)
{
	return -ENOTSUPP;
}

static inline int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
{
	return -ENOTSUPP;
}

static inline void icc_node_add(struct icc_node *node, struct icc_provider *provider)
{
}

static inline void icc_node_del(struct icc_node *node)
{
}

static inline int icc_provider_add(struct icc_provider *provider)
{
	return -ENOTSUPP;
}

static inline int icc_provider_del(struct icc_provider *provider)
{
	return -ENOTSUPP;
}
#endif /* CONFIG_INTERCONNECT */
#endif /* __LINUX_INTERCONNECT_PROVIDER_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018-2019, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#ifndef __LINUX_INTERCONNECT_H
#define __LINUX_INTERCONNECT_H
#include <linux/mutex.h>
#include <linux/types.h>
/* macros for converting to icc units */
#define Bps_to_icc(x) ((x) / 1000)
#define kBps_to_icc(x) (x)
#define MBps_to_icc(x) ((x) * 1000)
#define GBps_to_icc(x) ((x) * 1000 * 1000)
#define bps_to_icc(x) (1)
#define kbps_to_icc(x) ((x) / 8 + ((x) % 8 ? 1 : 0))
#define Mbps_to_icc(x) ((x) * 1000 / 8)
#define Gbps_to_icc(x) ((x) * 1000 * 1000 / 8)
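/*
 * Worked examples of the conversions above (results are in kBps, the icc
 * unit): MBps_to_icc(100) == 100000, Mbps_to_icc(1000) == 125000,
 * kbps_to_icc(9) == 2 (rounded up).
 */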
struct icc_path;
struct device;
#if IS_ENABLED(CONFIG_INTERCONNECT)
struct icc_path *icc_get(struct device *dev, const int src_id,
const int dst_id);
struct icc_path *of_icc_get(struct device *dev, const char *name);
void icc_put(struct icc_path *path);
int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw);
#else
static inline struct icc_path *icc_get(struct device *dev, const int src_id,
				       const int dst_id)
{
	return NULL;
}

static inline struct icc_path *of_icc_get(struct device *dev,
					  const char *name)
{
	return NULL;
}

static inline void icc_put(struct icc_path *path)
{
}

static inline int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
{
	return 0;
}
#endif /* CONFIG_INTERCONNECT */
#endif /* __LINUX_INTERCONNECT_H */