author    Kernel Build Daemon <kbuild@suse.de>  2019-10-16 07:22:45 +0200
committer Kernel Build Daemon <kbuild@suse.de>  2019-10-16 07:22:45 +0200
commit    41a6bcd4cf5694fe17fffa387ba588a89bd1b9a3 (patch)
tree      b157541463d51f015e8e6b4f9229a65ac1c10d02
parent    815fece854139c311b848a91250b03893d14915f (diff)
parent    a9359cc0344b49946bd08ed91b86268966d9aea8 (diff)

Merge branch 'SLE15' into openSUSE-15.0

-rw-r--r--  patches.suse/KVM-PPC-Book3S-HV-use-smp_mb-when-setting-clearing-h.patch | 465
-rw-r--r--  patches.suse/net-ibmvnic-Fix-EOI-when-running-in-XIVE-mode.patch        |  53
-rw-r--r--  patches.suse/x86-mm-use-write_once-when-setting-ptes.patch              | 142
-rw-r--r--  series.conf                                                            |   5
4 files changed, 665 insertions(+), 0 deletions(-)
diff --git a/patches.suse/KVM-PPC-Book3S-HV-use-smp_mb-when-setting-clearing-h.patch b/patches.suse/KVM-PPC-Book3S-HV-use-smp_mb-when-setting-clearing-h.patch
new file mode 100644
index 0000000000..fd55e8ff0a
--- /dev/null
+++ b/patches.suse/KVM-PPC-Book3S-HV-use-smp_mb-when-setting-clearing-h.patch
@@ -0,0 +1,465 @@
+From 3a83f677a6eeff65751b29e3648d7c69c3be83f3 Mon Sep 17 00:00:00 2001
+From: Michael Roth <mdroth@linux.vnet.ibm.com>
+Date: Wed, 11 Sep 2019 17:31:55 -0500
+Subject: [PATCH] KVM: PPC: Book3S HV: use smp_mb() when setting/clearing
+ host_ipi flag
+
+References: bsc#1061840
+Patch-mainline: v5.4-rc1
+Git-commit: 3a83f677a6eeff65751b29e3648d7c69c3be83f3
+
+On a 2-socket Power9 system with 32 cores/128 threads (SMT4) and 1TB
+of memory running the following guest configs:
+
+ guest A:
+ - 224GB of memory
+ - 56 VCPUs (sockets=1,cores=28,threads=2), where:
+ VCPUs 0-1 are pinned to CPUs 0-3,
+ VCPUs 2-3 are pinned to CPUs 4-7,
+ ...
+ VCPUs 54-55 are pinned to CPUs 108-111
+
+ guest B:
+ - 4GB of memory
+ - 4 VCPUs (sockets=1,cores=4,threads=1)
+
+with the following workloads (with KSM and THP enabled in all):
+
+ guest A:
+ stress --cpu 40 --io 20 --vm 20 --vm-bytes 512M
+
+ guest B:
+ stress --cpu 4 --io 4 --vm 4 --vm-bytes 512M
+
+ host:
+ stress --cpu 4 --io 4 --vm 2 --vm-bytes 256M
+
+the below soft-lockup traces were observed after an hour or so and
+persisted until the host was reset (this was found to be reliably
+reproducible for this configuration, for kernels 4.15, 4.18, 5.0,
+and 5.3-rc5):
+
+ [ 1253.183290] rcu: INFO: rcu_sched self-detected stall on CPU
+ [ 1253.183319] rcu: 124-....: (5250 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=1941
+ [ 1256.287426] watchdog: BUG: soft lockup - CPU#105 stuck for 23s! [CPU 52/KVM:19709]
+ [ 1264.075773] watchdog: BUG: soft lockup - CPU#24 stuck for 23s! [worker:19913]
+ [ 1264.079769] watchdog: BUG: soft lockup - CPU#31 stuck for 23s! [worker:20331]
+ [ 1264.095770] watchdog: BUG: soft lockup - CPU#45 stuck for 23s! [worker:20338]
+ [ 1264.131773] watchdog: BUG: soft lockup - CPU#64 stuck for 23s! [avocado:19525]
+ [ 1280.408480] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791]
+ [ 1316.198012] rcu: INFO: rcu_sched self-detected stall on CPU
+ [ 1316.198032] rcu: 124-....: (21003 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=8243
+ [ 1340.411024] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791]
+ [ 1379.212609] rcu: INFO: rcu_sched self-detected stall on CPU
+ [ 1379.212629] rcu: 124-....: (36756 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=14714
+ [ 1404.413615] watchdog: BUG: soft lockup - CPU#124 stuck for 22s! [ksmd:791]
+ [ 1442.227095] rcu: INFO: rcu_sched self-detected stall on CPU
+ [ 1442.227115] rcu: 124-....: (52509 ticks this GP) idle=10a/1/0x4000000000000002 softirq=5408/5408 fqs=21403
+ [ 1455.111787] INFO: task worker:19907 blocked for more than 120 seconds.
+ [ 1455.111822] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.111833] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.111884] INFO: task worker:19908 blocked for more than 120 seconds.
+ [ 1455.111905] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.111925] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.111966] INFO: task worker:20328 blocked for more than 120 seconds.
+ [ 1455.111986] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.111998] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.112048] INFO: task worker:20330 blocked for more than 120 seconds.
+ [ 1455.112068] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.112097] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.112138] INFO: task worker:20332 blocked for more than 120 seconds.
+ [ 1455.112159] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.112179] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.112210] INFO: task worker:20333 blocked for more than 120 seconds.
+ [ 1455.112231] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.112242] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.112282] INFO: task worker:20335 blocked for more than 120 seconds.
+ [ 1455.112303] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+ [ 1455.112332] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ [ 1455.112372] INFO: task worker:20336 blocked for more than 120 seconds.
+ [ 1455.112392] Tainted: G L 5.3.0-rc5-mdr-vanilla+ #1
+
+CPUs 45, 24, and 124 are stuck on spin locks, likely held by
+CPUs 105 and 31.
+
+CPUs 105 and 31 are stuck in smp_call_function_many(), waiting on
+target CPU 42. For instance:
+
+ # CPU 105 registers (via xmon)
+ R00 = c00000000020b20c R16 = 00007d1bcd800000
+ R01 = c00000363eaa7970 R17 = 0000000000000001
+ R02 = c0000000019b3a00 R18 = 000000000000006b
+ R03 = 000000000000002a R19 = 00007d537d7aecf0
+ R04 = 000000000000002a R20 = 60000000000000e0
+ R05 = 000000000000002a R21 = 0801000000000080
+ R06 = c0002073fb0caa08 R22 = 0000000000000d60
+ R07 = c0000000019ddd78 R23 = 0000000000000001
+ R08 = 000000000000002a R24 = c00000000147a700
+ R09 = 0000000000000001 R25 = c0002073fb0ca908
+ R10 = c000008ffeb4e660 R26 = 0000000000000000
+ R11 = c0002073fb0ca900 R27 = c0000000019e2464
+ R12 = c000000000050790 R28 = c0000000000812b0
+ R13 = c000207fff623e00 R29 = c0002073fb0ca808
+ R14 = 00007d1bbee00000 R30 = c0002073fb0ca800
+ R15 = 00007d1bcd600000 R31 = 0000000000000800
+ pc = c00000000020b260 smp_call_function_many+0x3d0/0x460
+ cfar= c00000000020b270 smp_call_function_many+0x3e0/0x460
+ lr = c00000000020b20c smp_call_function_many+0x37c/0x460
+ msr = 900000010288b033 cr = 44024824
+ ctr = c000000000050790 xer = 0000000000000000 trap = 100
+
+CPU 42 is running normally, doing VCPU work:
+
+ # CPU 42 stack trace (via xmon)
+ [link register ] c00800001be17188 kvmppc_book3s_radix_page_fault+0x90/0x2b0 [kvm_hv]
+ [c000008ed3343820] c000008ed3343850 (unreliable)
+ [c000008ed33438d0] c00800001be11b6c kvmppc_book3s_hv_page_fault+0x264/0xe30 [kvm_hv]
+ [c000008ed33439d0] c00800001be0d7b4 kvmppc_vcpu_run_hv+0x8dc/0xb50 [kvm_hv]
+ [c000008ed3343ae0] c00800001c10891c kvmppc_vcpu_run+0x34/0x48 [kvm]
+ [c000008ed3343b00] c00800001c10475c kvm_arch_vcpu_ioctl_run+0x244/0x420 [kvm]
+ [c000008ed3343b90] c00800001c0f5a78 kvm_vcpu_ioctl+0x470/0x7c8 [kvm]
+ [c000008ed3343d00] c000000000475450 do_vfs_ioctl+0xe0/0xc70
+ [c000008ed3343db0] c0000000004760e4 ksys_ioctl+0x104/0x120
+ [c000008ed3343e00] c000000000476128 sys_ioctl+0x28/0x80
+ [c000008ed3343e20] c00000000000b388 system_call+0x5c/0x70
+ --- Exception: c00 (System Call) at 00007d545cfd7694
+ SP (7d53ff7edf50) is in userspace
+
+It was subsequently found that ipi_message[PPC_MSG_CALL_FUNCTION]
+was set for CPU 42 by at least 1 of the CPUs waiting in
+smp_call_function_many(), but somehow the corresponding
+call_single_queue entries were never processed by CPU 42, causing the
+callers to spin in csd_lock_wait() indefinitely.
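+
+For reference, the waiting side amounts to a busy-wait on the CSD lock
+flag (simplified sketch of the kernel/smp.c helper, not part of this
+patch):
+
+	static void csd_lock_wait(call_single_data_t *csd)
+	{
+		/* spin until the target CPU runs the callback and
+		 * clears CSD_FLAG_LOCK */
+		smp_cond_load_acquire(&csd->flags, !(VAL & CSD_FLAG_LOCK));
+	}
+
+If the target CPU never notices the IPI, this loop never exits, which
+matches the soft-lockup signature above.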
+
+Nick Piggin suggested something similar to the following sequence as
+a possible explanation (interleaving of CALL_FUNCTION/RESCHEDULE
+IPI messages seems to be most common, but any mix of CALL_FUNCTION and
+!CALL_FUNCTION messages could trigger it):
+
+ CPU
+ X: smp_muxed_ipi_set_message():
+ X: smp_mb()
+ X: message[RESCHEDULE] = 1
+ X: doorbell_global_ipi(42):
+ X: kvmppc_set_host_ipi(42, 1)
+ X: ppc_msgsnd_sync()/smp_mb()
+ X: ppc_msgsnd() -> 42
+ 42: doorbell_exception(): // from CPU X
+ 42: ppc_msgsync()
+ 105: smp_muxed_ipi_set_message():
+ 105: smp_mb()
+ // STORE DEFERRED DUE TO RE-ORDERING
+ --105: message[CALL_FUNCTION] = 1
+ | 105: doorbell_global_ipi(42):
+ | 105: kvmppc_set_host_ipi(42, 1)
+ | 42: kvmppc_set_host_ipi(42, 0)
+ | 42: smp_ipi_demux_relaxed()
+ | 42: // returns to executing guest
+ | // RE-ORDERED STORE COMPLETES
+ ->105: message[CALL_FUNCTION] = 1
+ 105: ppc_msgsnd_sync()/smp_mb()
+ 105: ppc_msgsnd() -> 42
+ 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored
+ 105: // hangs waiting on 42 to process messages/call_single_queue
+
+This can be prevented with an smp_mb() at the beginning of
+kvmppc_set_host_ipi(), such that stores to message[<type>] (or other
+state indicated by the host_ipi flag) are ordered vs. the store to
+host_ipi.
+
+However, doing so might still allow for the following scenario (not
+yet observed):
+
+ CPU
+ X: smp_muxed_ipi_set_message():
+ X: smp_mb()
+ X: message[RESCHEDULE] = 1
+ X: doorbell_global_ipi(42):
+ X: kvmppc_set_host_ipi(42, 1)
+ X: ppc_msgsnd_sync()/smp_mb()
+ X: ppc_msgsnd() -> 42
+ 42: doorbell_exception(): // from CPU X
+ 42: ppc_msgsync()
+ // STORE DEFERRED DUE TO RE-ORDERING
+ -- 42: kvmppc_set_host_ipi(42, 0)
+ | 42: smp_ipi_demux_relaxed()
+ | 105: smp_muxed_ipi_set_message():
+ | 105: smp_mb()
+ | 105: message[CALL_FUNCTION] = 1
+ | 105: doorbell_global_ipi(42):
+ | 105: kvmppc_set_host_ipi(42, 1)
+ | // RE-ORDERED STORE COMPLETES
+ -> 42: kvmppc_set_host_ipi(42, 0)
+ 42: // returns to executing guest
+ 105: ppc_msgsnd_sync()/smp_mb()
+ 105: ppc_msgsnd() -> 42
+ 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored
+ 105: // hangs waiting on 42 to process messages/call_single_queue
+
+Fixing this scenario would require an smp_mb() *after* clearing
+host_ipi flag in kvmppc_set_host_ipi() to order the store vs.
+subsequent processing of IPI messages.
+
+To handle both cases, this patch splits kvmppc_set_host_ipi() into
+separate set/clear functions, where we execute smp_mb() prior to
+setting the host_ipi flag and after clearing it. These functions pair
+with each other to synchronize the sender and receiver sides.
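+
+In other words, the intended pairing between sender and receiver looks
+like this (illustrative sketch based on the doorbell IPI path; see the
+hunks below for the real call sites):
+
+	/* sender, e.g. doorbell_global_ipi() */
+	smp_muxed_ipi_set_message(cpu, msg);	/* store message[msg] */
+	kvmppc_set_host_ipi(cpu);		/* smp_mb(); host_ipi = 1 */
+	ppc_msgsnd_sync();
+	ppc_msgsnd(PPC_DBELL_MSGTYPE, 0, tag);
+
+	/* receiver, e.g. doorbell_exception() */
+	kvmppc_clear_host_ipi(smp_processor_id());	/* host_ipi = 0; smp_mb() */
+	smp_ipi_demux_relaxed();			/* consume message[] */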
+
+With that change in place the above workload ran for 20 hours without
+triggering any lock-ups.
+
+Fixes: 755563bc79c7 ("powerpc/powernv: Fixes for hypervisor doorbell handling") # v4.0
+Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
+Acked-by: Paul Mackerras <paulus@ozlabs.org>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/20190911223155.16045-1-mdroth@linux.vnet.ibm.com
+Acked-by: Michal Suchanek <msuchanek@suse.de>
+---
+ arch/powerpc/include/asm/kvm_ppc.h | 100 +++++++++++++++++++++++++-
+ arch/powerpc/kernel/dbell.c | 6 +-
+ arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
+ arch/powerpc/platforms/powernv/smp.c | 2 +-
+ arch/powerpc/sysdev/xics/icp-native.c | 6 +-
+ arch/powerpc/sysdev/xics/icp-opal.c | 6 +-
+ 6 files changed, 108 insertions(+), 14 deletions(-)
+
+--- a/arch/powerpc/include/asm/kvm_ppc.h
++++ b/arch/powerpc/include/asm/kvm_ppc.h
+@@ -452,9 +452,100 @@ static inline u32 kvmppc_get_xics_latch(
+ return xirr;
+ }
+
+-static inline void kvmppc_set_host_ipi(int cpu, u8 host_ipi)
++/*
++ * To avoid the need to unnecessarily exit fully to the host kernel, an IPI to
++ * a CPU thread that's running/napping inside of a guest is by default regarded
++ * as a request to wake the CPU (if needed) and continue execution within the
++ * guest, potentially to process new state like externally-generated
++ * interrupts or IPIs sent from within the guest itself (e.g. H_PROD/H_IPI).
++ *
++ * To force an exit to the host kernel, kvmppc_set_host_ipi() must be called
++ * prior to issuing the IPI to set the corresponding 'host_ipi' flag in the
++ * target CPU's PACA. To avoid unnecessary exits to the host, this flag should
++ * be immediately cleared via kvmppc_clear_host_ipi() by the IPI handler on
++ * the receiving side prior to processing the IPI work.
++ *
++ * NOTE:
++ *
++ * We currently issue an smp_mb() at the beginning of kvmppc_set_host_ipi().
++ * This is to guard against sequences such as the following:
++ *
++ * CPU
++ * X: smp_muxed_ipi_set_message():
++ * X: smp_mb()
++ * X: message[RESCHEDULE] = 1
++ * X: doorbell_global_ipi(42):
++ * X: kvmppc_set_host_ipi(42)
++ * X: ppc_msgsnd_sync()/smp_mb()
++ * X: ppc_msgsnd() -> 42
++ * 42: doorbell_exception(): // from CPU X
++ * 42: ppc_msgsync()
++ * 105: smp_muxed_ipi_set_message():
++ * 105: smp_mb()
++ * // STORE DEFERRED DUE TO RE-ORDERING
++ * --105: message[CALL_FUNCTION] = 1
++ * | 105: doorbell_global_ipi(42):
++ * | 105: kvmppc_set_host_ipi(42)
++ * | 42: kvmppc_clear_host_ipi(42)
++ * | 42: smp_ipi_demux_relaxed()
++ * | 42: // returns to executing guest
++ * | // RE-ORDERED STORE COMPLETES
++ * ->105: message[CALL_FUNCTION] = 1
++ * 105: ppc_msgsnd_sync()/smp_mb()
++ * 105: ppc_msgsnd() -> 42
++ * 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored
++ * 105: // hangs waiting on 42 to process messages/call_single_queue
++ *
++ * We also issue an smp_mb() at the end of kvmppc_clear_host_ipi(). This is
++ * to guard against sequences such as the following (as well as to create
++ * a read-side pairing with the barrier in kvmppc_set_host_ipi()):
++ *
++ * CPU
++ * X: smp_muxed_ipi_set_message():
++ * X: smp_mb()
++ * X: message[RESCHEDULE] = 1
++ * X: doorbell_global_ipi(42):
++ * X: kvmppc_set_host_ipi(42)
++ * X: ppc_msgsnd_sync()/smp_mb()
++ * X: ppc_msgsnd() -> 42
++ * 42: doorbell_exception(): // from CPU X
++ * 42: ppc_msgsync()
++ * // STORE DEFERRED DUE TO RE-ORDERING
++ * -- 42: kvmppc_clear_host_ipi(42)
++ * | 42: smp_ipi_demux_relaxed()
++ * | 105: smp_muxed_ipi_set_message():
++ * | 105: smb_mb()
++ * | 105: message[CALL_FUNCTION] = 1
++ * | 105: doorbell_global_ipi(42):
++ * | 105: kvmppc_set_host_ipi(42)
++ * | // RE-ORDERED STORE COMPLETES
++ * -> 42: kvmppc_clear_host_ipi(42)
++ * 42: // returns to executing guest
++ * 105: ppc_msgsnd_sync()/smp_mb()
++ * 105: ppc_msgsnd() -> 42
++ * 42: local_paca->kvm_hstate.host_ipi == 0 // IPI ignored
++ * 105: // hangs waiting on 42 to process messages/call_single_queue
++ */
++static inline void kvmppc_set_host_ipi(int cpu)
+ {
+- paca[cpu].kvm_hstate.host_ipi = host_ipi;
++ /*
++ * order stores of IPI messages vs. setting of host_ipi flag
++ *
++ * pairs with the barrier in kvmppc_clear_host_ipi()
++ */
++ smp_mb();
++ paca[cpu].kvm_hstate.host_ipi = 1;
++}
++
++static inline void kvmppc_clear_host_ipi(int cpu)
++{
++ paca[cpu].kvm_hstate.host_ipi = 0;
++ /*
++ * order clearing of host_ipi flag vs. processing of IPI messages
++ *
++ * pairs with the barrier in kvmppc_set_host_ipi()
++ */
++ smp_mb();
+ }
+
+ static inline void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
+@@ -483,7 +574,10 @@ static inline u32 kvmppc_get_xics_latch(
+ return 0;
+ }
+
+-static inline void kvmppc_set_host_ipi(int cpu, u8 host_ipi)
++static inline void kvmppc_set_host_ipi(int cpu)
++{}
++
++static inline void kvmppc_clear_host_ipi(int cpu)
+ {}
+
+ static inline void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
+diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
+index 804b1a6196fa..f17ff1200eaa 100644
+--- a/arch/powerpc/kernel/dbell.c
++++ b/arch/powerpc/kernel/dbell.c
+@@ -33,7 +33,7 @@ void doorbell_global_ipi(int cpu)
+ {
+ u32 tag = get_hard_smp_processor_id(cpu);
+
+- kvmppc_set_host_ipi(cpu, 1);
++ kvmppc_set_host_ipi(cpu);
+ /* Order previous accesses vs. msgsnd, which is treated as a store */
+ ppc_msgsnd_sync();
+ ppc_msgsnd(PPC_DBELL_MSGTYPE, 0, tag);
+@@ -48,7 +48,7 @@ void doorbell_core_ipi(int cpu)
+ {
+ u32 tag = cpu_thread_in_core(cpu);
+
+- kvmppc_set_host_ipi(cpu, 1);
++ kvmppc_set_host_ipi(cpu);
+ /* Order previous accesses vs. msgsnd, which is treated as a store */
+ ppc_msgsnd_sync();
+ ppc_msgsnd(PPC_DBELL_MSGTYPE, 0, tag);
+@@ -84,7 +84,7 @@ void doorbell_exception(struct pt_regs *regs)
+
+ may_hard_irq_enable();
+
+- kvmppc_set_host_ipi(smp_processor_id(), 0);
++ kvmppc_clear_host_ipi(smp_processor_id());
+ __this_cpu_inc(irq_stat.doorbell_irqs);
+
+ smp_ipi_demux_relaxed(); /* already performed the barrier */
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
+index 4d2ec77d806c..287d5911df0f 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
+@@ -58,7 +58,7 @@ static inline void icp_send_hcore_msg(int hcore, struct kvm_vcpu *vcpu)
+ hcpu = hcore << threads_shift;
+ kvmppc_host_rm_ops_hv->rm_core[hcore].rm_data = vcpu;
+ smp_muxed_ipi_set_message(hcpu, PPC_MSG_RM_HOST_ACTION);
+- kvmppc_set_host_ipi(hcpu, 1);
++ kvmppc_set_host_ipi(hcpu);
+ smp_mb();
+ kvmhv_rm_send_ipi(hcpu);
+ }
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 94cd96b9b7bb..fbd6e6b7bbf2 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -193,7 +193,7 @@ static void pnv_smp_cpu_kill_self(void)
+ * for coming online, which are handled via
+ * generic_check_cpu_restart() calls.
+ */
+- kvmppc_set_host_ipi(cpu, 0);
++ kvmppc_clear_host_ipi(cpu);
+
+ srr1 = pnv_cpu_offline(cpu);
+
+diff --git a/arch/powerpc/sysdev/xics/icp-native.c b/arch/powerpc/sysdev/xics/icp-native.c
+index 485569ff7ef1..7d13d2ef5a90 100644
+--- a/arch/powerpc/sysdev/xics/icp-native.c
++++ b/arch/powerpc/sysdev/xics/icp-native.c
+@@ -140,7 +140,7 @@ static unsigned int icp_native_get_irq(void)
+
+ static void icp_native_cause_ipi(int cpu)
+ {
+- kvmppc_set_host_ipi(cpu, 1);
++ kvmppc_set_host_ipi(cpu);
+ icp_native_set_qirr(cpu, IPI_PRIORITY);
+ }
+
+@@ -179,7 +179,7 @@ void icp_native_flush_interrupt(void)
+ if (vec == XICS_IPI) {
+ /* Clear pending IPI */
+ int cpu = smp_processor_id();
+- kvmppc_set_host_ipi(cpu, 0);
++ kvmppc_clear_host_ipi(cpu);
+ icp_native_set_qirr(cpu, 0xff);
+ } else {
+ pr_err("XICS: hw interrupt 0x%x to offline cpu, disabling\n",
+@@ -200,7 +200,7 @@ static irqreturn_t icp_native_ipi_action(int irq, void *dev_id)
+ {
+ int cpu = smp_processor_id();
+
+- kvmppc_set_host_ipi(cpu, 0);
++ kvmppc_clear_host_ipi(cpu);
+ icp_native_set_qirr(cpu, 0xff);
+
+ return smp_ipi_demux();
+diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c
+index 8bb8dd7dd6ad..68fd2540b093 100644
+--- a/arch/powerpc/sysdev/xics/icp-opal.c
++++ b/arch/powerpc/sysdev/xics/icp-opal.c
+@@ -126,7 +126,7 @@ static void icp_opal_cause_ipi(int cpu)
+ {
+ int hw_cpu = get_hard_smp_processor_id(cpu);
+
+- kvmppc_set_host_ipi(cpu, 1);
++ kvmppc_set_host_ipi(cpu);
+ opal_int_set_mfrr(hw_cpu, IPI_PRIORITY);
+ }
+
+@@ -134,7 +134,7 @@ static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id)
+ {
+ int cpu = smp_processor_id();
+
+- kvmppc_set_host_ipi(cpu, 0);
++ kvmppc_clear_host_ipi(cpu);
+ opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff);
+
+ return smp_ipi_demux();
+@@ -157,7 +157,7 @@ void icp_opal_flush_interrupt(void)
+ if (vec == XICS_IPI) {
+ /* Clear pending IPI */
+ int cpu = smp_processor_id();
+- kvmppc_set_host_ipi(cpu, 0);
++ kvmppc_clear_host_ipi(cpu);
+ opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff);
+ } else {
+ pr_err("XICS: hw interrupt 0x%x to offline cpu, "
+--
+2.23.0
+
diff --git a/patches.suse/net-ibmvnic-Fix-EOI-when-running-in-XIVE-mode.patch b/patches.suse/net-ibmvnic-Fix-EOI-when-running-in-XIVE-mode.patch
new file mode 100644
index 0000000000..9ceee8b4c0
--- /dev/null
+++ b/patches.suse/net-ibmvnic-Fix-EOI-when-running-in-XIVE-mode.patch
@@ -0,0 +1,53 @@
+From 11d49ce9f7946dfed4dcf5dbde865c78058b50ab Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>
+Date: Fri, 11 Oct 2019 07:52:54 +0200
+Subject: [PATCH] net/ibmvnic: Fix EOI when running in XIVE mode.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+References: bsc#1089644, ltc#166495, ltc#165544, git-fixes
+Patch-mainline: queued
+Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net
+Git-commit: 11d49ce9f7946dfed4dcf5dbde865c78058b50ab
+
+pSeries machines on POWER9 processors can run with the XICS (legacy)
+interrupt mode or with the XIVE exploitation interrupt mode. These
+interrupt controllers have different interfaces for interrupt
+management: XICS uses hcalls, while XIVE uses loads and stores on a
+page. Since H_EOI is a XICS interface, the enable_scrq_irq() routine
+can fail when the machine runs in XIVE mode.
+
+Fix that by calling the EOI handler of the interrupt chip.
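+
+The mode-agnostic pattern is (sketch; the NULL checks are an added
+precaution, not part of the patch itself):
+
+	struct irq_desc *desc = irq_to_desc(scrq->irq);
+	struct irq_chip *chip = desc ? irq_desc_get_chip(desc) : NULL;
+
+	if (chip && chip->irq_eoi)
+		chip->irq_eoi(&desc->irq_data);
+
+so the same call ends up as an H_EOI hcall under XICS and as a store
+under XIVE, whichever irq_chip is installed.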
+
+Fixes: f23e0643cd0b ("ibmvnic: Clear pending interrupt after device reset")
+Signed-off-by: Cédric Le Goater <clg@kaod.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Acked-by: Michal Suchanek <msuchanek@suse.de>
+---
+ drivers/net/ethernet/ibm/ibmvnic.c | 8 +++-----
+ 1 file changed, 3 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 2b073a3c0b84..f59d9a8e35e2 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2878,12 +2878,10 @@ static int enable_scrq_irq(struct ibmvnic_adapter *adapter,
+
+ if (test_bit(0, &adapter->resetting) &&
+ adapter->reset_reason == VNIC_RESET_MOBILITY) {
+- u64 val = (0xff000000) | scrq->hw_irq;
++ struct irq_desc *desc = irq_to_desc(scrq->irq);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+- rc = plpar_hcall_norets(H_EOI, val);
+- if (rc)
+- dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
+- val, rc);
++ chip->irq_eoi(&desc->irq_data);
+ }
+
+ rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
+--
+2.23.0
+
diff --git a/patches.suse/x86-mm-use-write_once-when-setting-ptes.patch b/patches.suse/x86-mm-use-write_once-when-setting-ptes.patch
new file mode 100644
index 0000000000..c0f2b3b380
--- /dev/null
+++ b/patches.suse/x86-mm-use-write_once-when-setting-ptes.patch
@@ -0,0 +1,142 @@
+From: Nadav Amit <namit@vmware.com>
+Date: Sun, 2 Sep 2018 11:14:50 -0700
+Subject: x86/mm: Use WRITE_ONCE() when setting PTEs
+Git-commit: 9bc4f28af75a91aea0ae383f50b0a430c4509303
+Patch-mainline: v4.19-rc3
+References: bsc#1114279
+
+When page-table entries are set, the compiler might optimize their
+assignment by using multiple instructions to set the PTE. This might
+turn into a security hazard if the user somehow manages to use the
+interim PTE. L1TF does not make our lives easier, making even an interim
+non-present PTE a security hazard.
+
+Using WRITE_ONCE() to set PTEs and friends should prevent this potential
+security hazard.
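+
+Conceptually, WRITE_ONCE() forces the assignment through a volatile
+access, which the compiler must emit as a single store (simplified
+sketch; the real macro in <linux/compiler.h> is more elaborate):
+
+	#define WRITE_ONCE(x, val) \
+		(*(volatile typeof(x) *)&(x) = (val))
+
+	*ptep = pte;		/* plain store: may be split into
+				 * multiple writes, exposing an
+				 * interim PTE */
+	WRITE_ONCE(*ptep, pte);	/* volatile store: one access */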
+
+I skimmed the differences in the binary with and without this patch. The
+differences are (obviously) greater when CONFIG_PARAVIRT=n as more
+code optimizations are possible. For better and worse, the impact on the
+binary with this patch is pretty small. Skimming the code did not cause
+anything to jump out as a security hazard, but it seems that at least
+move_soft_dirty_pte() caused set_pte_at() to use multiple writes.
+
+Signed-off-by: Nadav Amit <namit@vmware.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: Andi Kleen <ak@linux.intel.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: Sean Christopherson <sean.j.christopherson@intel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20180902181451.80520-1-namit@vmware.com
+
+Acked-by: Borislav Petkov <bp@suse.de>
+---
+ arch/x86/include/asm/pgtable_64.h | 20 ++++++++++----------
+ arch/x86/mm/pgtable.c | 8 ++++----
+ 2 files changed, 14 insertions(+), 14 deletions(-)
+
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -53,15 +53,15 @@ struct mm_struct;
+ void set_pte_vaddr_p4d(p4d_t *p4d_page, unsigned long vaddr, pte_t new_pte);
+ void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte);
+
+-static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep)
++static inline void native_set_pte(pte_t *ptep, pte_t pte)
+ {
+- *ptep = native_make_pte(0);
++ WRITE_ONCE(*ptep, pte);
+ }
+
+-static inline void native_set_pte(pte_t *ptep, pte_t pte)
++static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep)
+ {
+- *ptep = pte;
++ native_set_pte(ptep, native_make_pte(0));
+ }
+
+ static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
+@@ -71,7 +71,7 @@ static inline void native_set_pte_atomic
+
+ static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
+ {
+- *pmdp = pmd;
++ WRITE_ONCE(*pmdp, pmd);
+ }
+
+ static inline void native_pmd_clear(pmd_t *pmd)
+@@ -107,7 +107,7 @@ static inline pmd_t native_pmdp_get_and_
+
+ static inline void native_set_pud(pud_t *pudp, pud_t pud)
+ {
+- *pudp = pud;
++ WRITE_ONCE(*pudp, pud);
+ }
+
+ static inline void native_pud_clear(pud_t *pud)
+@@ -219,7 +219,7 @@ static inline void native_set_p4d(p4d_t
+ #if defined(CONFIG_PAGE_TABLE_ISOLATION) && !defined(CONFIG_X86_5LEVEL)
+ p4dp->pgd = pti_set_user_pgd(&p4dp->pgd, p4d.pgd);
+ #else
+- *p4dp = p4d;
++ WRITE_ONCE(*p4dp, p4d);
+ #endif
+ }
+
+@@ -235,9 +235,9 @@ static inline void native_p4d_clear(p4d_
+ static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
+ {
+ #ifdef CONFIG_PAGE_TABLE_ISOLATION
+- *pgdp = pti_set_user_pgd(pgdp, pgd);
++ WRITE_ONCE(*pgdp, pti_set_user_pgd(pgdp, pgd));
+ #else
+- *pgdp = pgd;
++ WRITE_ONCE(*pgdp, pgd);
+ #endif
+ }
+
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -259,7 +259,7 @@ static void pgd_mop_up_pmds(struct mm_st
+ if (pgd_val(pgd) != 0) {
+ pmd_t *pmd = (pmd_t *)pgd_page_vaddr(pgd);
+
+- pgdp[i] = native_make_pgd(0);
++ pgd_clear(&pgdp[i]);
+
+ paravirt_release_pmd(pgd_val(pgd) >> PAGE_SHIFT);
+ pmd_free(mm, pmd);
+@@ -429,7 +429,7 @@ int ptep_set_access_flags(struct vm_area
+ int changed = !pte_same(*ptep, entry);
+
+ if (changed && dirty) {
+- *ptep = entry;
++ set_pte(ptep, entry);
+ pte_update(vma->vm_mm, address, ptep);
+ }
+
+@@ -446,7 +446,7 @@ int pmdp_set_access_flags(struct vm_area
+ VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+
+ if (changed && dirty) {
+- *pmdp = entry;
++ set_pmd(pmdp, entry);
+ /*
+ * We had a write-protection fault here and changed the pmd
+ * to more permissive. No need to flush the TLB for that,
+@@ -466,7 +466,7 @@ int pudp_set_access_flags(struct vm_area
+ VM_BUG_ON(address & ~HPAGE_PUD_MASK);
+
+ if (changed && dirty) {
+- *pudp = entry;
++ set_pud(pudp, entry);
+ /*
+ * We had a write-protection fault here and changed the pud
+ * to more permissive. No need to flush the TLB for that,
diff --git a/series.conf b/series.conf
index 988b404fb0..b56f63d647 100644
--- a/series.conf
+++ b/series.conf
@@ -19508,6 +19508,7 @@
patches.suse/x86-microcode-make-sure-boot_cpu_data-microcode-is-up-to-date
patches.suse/x86-microcode-update-the-new-microcode-revision-unconditionally
patches.suse/x86-process-don-t-mix-user-kernel-regs-in-64bit-_show_regs
+ patches.suse/x86-mm-use-write_once-when-setting-ptes.patch
patches.suse/iw_cxgb4-only-allow-1-flush-on-user-qps.patch
patches.suse/IB-ipoib-Avoid-a-race-condition-between-start_xmit-a.patch
patches.suse/bnxt_re-Fix-couple-of-memory-leaks-that-could-lead-t.patch
@@ -24741,6 +24742,7 @@
patches.suse/livepatch-nullify-obj-mod-in-klp_module_coming-s-error-path.patch
patches.suse/suse-hv-PCI-hv-Detect-and-fix-Hyper-V-PCI-domain-number-coll.patch
patches.suse/msft-hv-1947-PCI-hv-Use-bytes-4-and-5-from-instance-ID-as-the-PCI.patch
+ patches.suse/KVM-PPC-Book3S-HV-use-smp_mb-when-setting-clearing-h.patch
patches.suse/powerpc-pseries-Read-TLB-Block-Invalidate-Characteri.patch
patches.suse/powerpc-pseries-Call-H_BLOCK_REMOVE-when-supported.patch
patches.suse/powerpc-book3s64-mm-Don-t-do-tlbie-fixup-for-some-ha.patch
@@ -24759,6 +24761,9 @@
patches.suse/msft-hv-1948-scsi-storvsc-setup-1-1-mapping-between-hardware-queu.patch
patches.suse/0001-kernel-sysctl.c-do-not-override-max_threads-provided.patch
+ # davem/net
+ patches.suse/net-ibmvnic-Fix-EOI-when-running-in-XIVE-mode.patch
+
# jejb/scsi for-next
patches.suse/scsi-qla2xxx-Fix-Nport-ID-display-value.patch