commit 834125557e0a4e5afafee3caf79696078d0820ae Author: Sasha Levin Date: Sat Apr 23 16:46:40 2016 -0400 Linux 3.18.32 Signed-off-by: Sasha Levin commit e912c4abafa7d306539040b1a231d277fd750be8 Author: Eric Dumazet Date: Thu Sep 17 08:38:00 2015 -0700 tcp_cubic: do not set epoch_start in the future [ Upstream commit c2e7204d180f8efc80f27959ca9cf16fa17f67db ] Tracking idle time in bictcp_cwnd_event() is imprecise, as epoch_start is normally set at ACK processing time, not at send time. Doing a proper fix would need to add an additional state variable, and does not seem worth the trouble, given CUBIC bug has been there forever before Jana noticed it. Let's simply not set epoch_start in the future, otherwise bictcp_update() could overflow and CUBIC would again grow cwnd too fast. This was detected thanks to a packetdrill test Neal wrote that was flaky before applying this fix. Fixes: 30927520dbae ("tcp_cubic: better follow cubic curve after idle period") Signed-off-by: Eric Dumazet Signed-off-by: Neal Cardwell Signed-off-by: Yuchung Cheng Cc: Jana Iyengar Signed-off-by: David S. Miller Signed-off-by: Sasha Levin commit 5d6226fe6abbaf576328d6cbc8c5473f32428d68 Author: Filipe Manana Date: Fri Jul 3 20:30:34 2015 +0100 Btrfs: fix list transaction->pending_ordered corruption [ Upstream commit d3efe08400317888f559bbedf0e42cd31575d0ef ] When we call btrfs_commit_transaction(), we splice the list "ordered" of our transaction handle into the transaction's "pending_ordered" list, but we don't re-initialize the "ordered" list of our transaction handle, this means it still points to the same elements it used to before the splice. Then we check if the current transaction's state is >= TRANS_STATE_COMMIT_START and if it is we end up calling btrfs_end_transaction() which simply splices again the "ordered" list of our handle into the transaction's "pending_ordered" list, leaving multiple pointers to the same ordered extents which results in list corruption when we are iterating, removing and freeing ordered extents at btrfs_wait_pending_ordered(), resulting in access to dangling pointers / use-after-free issues. Similarly, btrfs_end_transaction() can end up in some cases calling btrfs_commit_transaction(), and both did a list splice of the transaction handle's "ordered" list into the transaction's "pending_ordered" without re-initializing the handle's "ordered" list, resulting in exactly the same problem. This produces the following warning on a kernel with linked list debugging enabled: [109749.265416] ------------[ cut here ]------------ [109749.266410] WARNING: CPU: 7 PID: 324 at lib/list_debug.c:59 __list_del_entry+0x5a/0x98() [109749.267969] list_del corruption. prev->next should be ffff8800ba087e20, but was fffffff8c1f7c35d (...) [109749.287505] Call Trace: [109749.288135] [] dump_stack+0x4f/0x7b [109749.298080] [] ? console_unlock+0x356/0x3a2 [109749.331605] [] warn_slowpath_common+0xa1/0xbb [109749.334849] [] ? __list_del_entry+0x5a/0x98 [109749.337093] [] warn_slowpath_fmt+0x46/0x48 [109749.337847] [] __list_del_entry+0x5a/0x98 [109749.338678] [] btrfs_wait_pending_ordered+0x46/0xdb [btrfs] [109749.340145] [] ? __btrfs_run_delayed_items+0x149/0x163 [btrfs] [109749.348313] [] btrfs_commit_transaction+0x36b/0xa10 [btrfs] [109749.349745] [] ? 
trace_hardirqs_on+0xd/0xf [109749.350819] [] btrfs_sync_file+0x36f/0x3fc [btrfs] [109749.351976] [] vfs_fsync_range+0x8f/0x9e [109749.360341] [] vfs_fsync+0x1c/0x1e [109749.368828] [] do_fsync+0x34/0x4e [109749.369790] [] SyS_fsync+0x10/0x14 [109749.370925] [] system_call_fastpath+0x12/0x6f [109749.382274] ---[ end trace 48e0d07f7c03d95a ]--- On a non-debug kernel this leads to invalid memory accesses, causing a crash. Fix this by using list_splice_init() instead of list_splice() in btrfs_commit_transaction() and btrfs_end_transaction(). Cc: stable@vger.kernel.org Fixes: 50d9aa99bd35 ("Btrfs: make sure logged extents complete in the current transaction V3") Signed-off-by: Filipe Manana Reviewed-by: David Sterba Signed-off-by: Sasha Levin commit 95617b5e03996c9b76743c187d53849ae1c296a2 Author: Mike Galbraith Date: Fri Apr 22 20:38:23 2016 -0400 Correct backport of fa3c776 ("Thermal: Ignore invalid trip points") Backport of 81ad4276b505e987dd8ebbdf63605f92cd172b52 failed to adjust for the intervening ->get_trip_temp() argument type change, thus causing the stack protector to panic. drivers/thermal/thermal_core.c: In function ‘thermal_zone_device_register’: drivers/thermal/thermal_core.c:1569:41: warning: passing argument 3 of ‘tz->ops->get_trip_temp’ from incompatible pointer type [-Wincompatible-pointer-types] if (tz->ops->get_trip_temp(tz, count, &trip_temp)) ^ drivers/thermal/thermal_core.c:1569:41: note: expected ‘long unsigned int *’ but argument is of type ‘int *’ CC: #3.18,#4.1 Signed-off-by: Mike Galbraith commit e1ae108d141866e3ac73aa31575027271b8d42c4 Author: Bin Liu Date: Mon Jan 26 16:22:06 2015 -0600 usb: musb: cppi41: correct the macro name EP_MODE_AUTOREG_* [ Upstream commit 0149b07a9e28b0d8bd2fc1c238ffe7d530c2673f ] The macro EP_MODE_AUTOREG_* should be called EP_MODE_AUTOREQ_*, as they are used for the register AUTOREQ. Signed-off-by: Bin Liu Signed-off-by: Felipe Balbi Signed-off-by: Sasha Levin commit 3aacca81cca936538810ae97612f9132d38ba7ac Author: Eric Dumazet Date: Wed Sep 9 21:55:07 2015 -0700 tcp_cubic: better follow cubic curve after idle period [ Upstream commit 30927520dbae297182990bb21d08762bcc35ce1d ] Jana Iyengar found an interesting issue in CUBIC: the epoch is only updated/reset initially and when experiencing losses. The delta "t" of now - epoch_start can be arbitrarily large after app idle, as can the bic_target. Consequently the slope (inverse of ca->cnt) would be really large, and eventually ca->cnt would be lower-bounded to 2 to have delayed-ACK slow-start behavior. This particularly shows up, when slow_start_after_idle is disabled, as a dangerous cwnd inflation (1.5 x RTT) after a few seconds of idle time. Jana's initial fix was to reset epoch_start if app limited, but Neal pointed out it would ask the CUBIC algorithm to recalculate the curve so that we again start growing steeply upward from where cwnd is now (as CUBIC does just after a loss). Ideally we'd want the cwnd growth curve to be the same shape, just shifted later in time by the amount of the idle period. Reported-by: Jana Iyengar Signed-off-by: Eric Dumazet Signed-off-by: Yuchung Cheng Signed-off-by: Neal Cardwell Cc: Stephen Hemminger Cc: Sangtae Ha Cc: Lawrence Brakmo Signed-off-by: David S.
Miller Signed-off-by: Sasha Levin commit c1ffc7c90954257b66b17eff3a0732dfe449f853 Author: Robert Dobrowolski Date: Thu Mar 24 03:30:07 2016 -0700 usb: hcd: out of bounds access in for_each_companion [ Upstream commit e86103a75705c7c530768f4ffaba74cf382910f2 ] On BXT platform Host Controller and Device Controller figure as same PCI device but with different device function. HCD should not pass data to Device Controller but only to Host Controllers. Checking if companion device is Host Controller, otherwise skip. Cc: Signed-off-by: Robert Dobrowolski Acked-by: Alan Stern Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit e6102e3b2f80d5af732587b9f254619a608cbf39 Author: Hans de Goede Date: Tue Apr 12 12:27:09 2016 +0200 USB: uas: Add a new NO_REPORT_LUNS quirk [ Upstream commit 1363074667a6b7d0507527742ccd7bbed5e3ceaa ] Add a new NO_REPORT_LUNS quirk and set it for Seagate drives with an usb-id of: 0bc2:331a, as these will fail to respond to a REPORT_LUNS command. Cc: stable@vger.kernel.org Reported-and-tested-by: David Webb Signed-off-by: Hans de Goede Acked-by: Alan Stern Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit ef6d6f5865629591d63561fd12ae01931c77a80a Author: Mathias Nyman Date: Fri Apr 8 16:25:10 2016 +0300 xhci: fix 10 second timeout on removal of PCI hotpluggable xhci controllers [ Upstream commit 98d74f9ceaefc2b6c4a6440050163a83be0abede ] PCI hotpluggable xhci controllers such as some Alpine Ridge solutions will remove the xhci controller from the PCI bus when the last USB device is disconnected. Add a flag to indicate that the host is being removed to avoid queueing configure_endpoint commands for the dropped endpoints. For PCI hotplugged controllers this will prevent 5 second command timeouts For static xhci controllers the configure_endpoint command is not needed in the removal case as everything will be returned, freed, and the controller is reset. For now the flag is only set for PCI connected host controllers. Cc: Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 797648b968a85d6d79253f5899f8c0de5e942ac5 Author: Roger Quadros Date: Fri May 29 17:01:49 2015 +0300 usb: xhci: fix xhci locking up during hcd remove [ Upstream commit ad6b1d914a9e07f3b9a9ae3396f3c840d0070539 ] The problem seems to be that if a new device is detected while we have already removed the shared HCD, then many of the xhci operations (e.g. xhci_alloc_dev(), xhci_setup_device()) hang as command never completes. I don't think XHCI can operate without the shared HCD as we've already called xhci_halt() in xhci_only_stop_hcd() when shared HCD goes away. We need to prevent new commands from being queued not only when HCD is dying but also when HCD is halted. The following lockup was detected while testing the otg state machine. 
[ 178.199951] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.205799] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1 [ 178.214458] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220f04c hci version 0x100 quirks 0x00010010 [ 178.223619] xhci-hcd xhci-hcd.0.auto: irq 400, io mem 0x48890000 [ 178.230677] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 178.237796] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.245358] usb usb1: Product: xHCI Host Controller [ 178.250483] usb usb1: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.257783] usb usb1: SerialNumber: xhci-hcd.0.auto [ 178.267014] hub 1-0:1.0: USB hub found [ 178.272108] hub 1-0:1.0: 1 port detected [ 178.278371] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.284171] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2 [ 178.294038] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003 [ 178.301183] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.308776] usb usb2: Product: xHCI Host Controller [ 178.313902] usb usb2: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.321222] usb usb2: SerialNumber: xhci-hcd.0.auto [ 178.329061] hub 2-0:1.0: USB hub found [ 178.333126] hub 2-0:1.0: 1 port detected [ 178.567585] dwc3 48890000.usb: usb_otg_start_host 0 [ 178.572707] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 178.578064] usb usb2: USB disconnect, device number 1 [ 178.586565] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered [ 178.592585] xhci-hcd xhci-hcd.0.auto: remove, state 1 [ 178.597924] usb usb1: USB disconnect, device number 1 [ 178.603248] usb 1-1: new high-speed USB device number 2 using xhci-hcd [ 190.597337] INFO: task kworker/u4:0:6 blocked for more than 10 seconds. [ 190.604273] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.610228] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 190.618443] kworker/u4:0 D c05c0ac0 0 6 2 0x00000000 [ 190.625120] Workqueue: usb_otg usb_otg_work [ 190.629533] [] (__schedule) from [] (schedule+0x34/0x98) [ 190.636915] [] (schedule) from [] (schedule_preempt_disabled+0xc/0x10) [ 190.645591] [] (schedule_preempt_disabled) from [] (mutex_lock_nested+0x1ac/0x3fc) [ 190.655353] [] (mutex_lock_nested) from [] (usb_disconnect+0x3c/0x208) [ 190.664043] [] (usb_disconnect) from [] (_usb_remove_hcd+0x98/0x1d8) [ 190.672535] [] (_usb_remove_hcd) from [] (usb_otg_start_host+0x50/0xf4) [ 190.681299] [] (usb_otg_start_host) from [] (otg_set_protocol+0x5c/0xd0) [ 190.690153] [] (otg_set_protocol) from [] (otg_set_state+0x170/0xbfc) [ 190.698735] [] (otg_set_state) from [] (otg_statemachine+0x12c/0x470) [ 190.707326] [] (otg_statemachine) from [] (process_one_work+0x1b4/0x4a0) [ 190.716162] [] (process_one_work) from [] (worker_thread+0x154/0x44c) [ 190.724742] [] (worker_thread) from [] (kthread+0xd4/0xf0) [ 190.732328] [] (kthread) from [] (ret_from_fork+0x14/0x24) [ 190.739898] 5 locks held by kworker/u4:0/6: [ 190.744274] #0: ("%s""usb_otg"){.+.+.+}, at: [] process_one_work+0x124/0x4a0 [ 190.752799] #1: ((&otgd->work)){+.+.+.}, at: [] process_one_work+0x124/0x4a0 [ 190.761326] #2: (&otgd->fsm.lock){+.+.+.}, at: [] otg_statemachine+0x18/0x470 [ 190.769934] #3: (usb_bus_list_lock){+.+.+.}, at: [] _usb_remove_hcd+0x90/0x1d8 [ 190.778635] #4: (&dev->mutex){......}, at: [] usb_disconnect+0x3c/0x208 [ 190.786700] INFO: task kworker/1:0:14 blocked for more than 10 seconds. 
[ 190.793633] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.799567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 190.807783] kworker/1:0 D c05c0ac0 0 14 2 0x00000000 [ 190.814457] Workqueue: usb_hub_wq hub_event [ 190.818866] [] (__schedule) from [] (schedule+0x34/0x98) [ 190.826252] [] (schedule) from [] (schedule_timeout+0x13c/0x1ec) [ 190.834377] [] (schedule_timeout) from [] (wait_for_common+0xbc/0x150) [ 190.843062] [] (wait_for_common) from [] (xhci_setup_device+0x164/0x5cc [xhci_hcd]) [ 190.852986] [] (xhci_setup_device [xhci_hcd]) from [] (hub_port_init+0x3f4/0xb10) [ 190.862667] [] (hub_port_init) from [] (hub_event+0x704/0x1018) [ 190.870704] [] (hub_event) from [] (process_one_work+0x1b4/0x4a0) [ 190.878919] [] (process_one_work) from [] (worker_thread+0x154/0x44c) [ 190.887503] [] (worker_thread) from [] (kthread+0xd4/0xf0) [ 190.895076] [] (kthread) from [] (ret_from_fork+0x14/0x24) [ 190.902650] 5 locks held by kworker/1:0/14: [ 190.907023] #0: ("usb_hub_wq"){.+.+.+}, at: [] process_one_work+0x124/0x4a0 [ 190.915454] #1: ((&hub->events)){+.+.+.}, at: [] process_one_work+0x124/0x4a0 [ 190.924070] #2: (&dev->mutex){......}, at: [] hub_event+0x30/0x1018 [ 190.931768] #3: (&port_dev->status_lock){+.+.+.}, at: [] hub_event+0x6f0/0x1018 [ 190.940558] #4: (&bus->usb_address0_mutex){+.+.+.}, at: [] hub_port_init+0x58/0xb10 Signed-off-by: Roger Quadros Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit b37a6dfe19c77bbdf063679ab3b28c8b13282189 Author: Lu Baolu Date: Fri Apr 8 16:25:09 2016 +0300 usb: xhci: fix wild pointers in xhci_mem_cleanup [ Upstream commit 71504062a7c34838c3fccd92c447f399d3cb5797 ] This patch fixes some wild pointers produced by xhci_mem_cleanup. These wild pointers will cause system crash if xhci_mem_cleanup() is called twice. Reported-and-tested-by: Pengcheng Li Signed-off-by: Lu Baolu Cc: stable@vger.kernel.org Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit f1aa53dc77f70a97a879b97990fb07b1c6e8ef57 Author: Yoshihiro Shimoda Date: Fri Apr 8 16:25:07 2016 +0300 usb: host: xhci: add a new quirk XHCI_NO_64BIT_SUPPORT [ Upstream commit 0a380be8233dbf8dd20795b801c5d5d5ef3992f7 ] On some xHCI controllers (e.g. R-Car SoCs), the AC64 bit (bit 0) of HCCPARAMS1 is set to 1. However, the xHCs don't support 64-bit address memory pointers actually. So, in this case, this driver should call dma_set_coherent_mask(dev, DMA_BIT_MASK(32)) in xhci_gen_setup(). Otherwise, the xHCI controller will be died after a usb device is connected if it runs on above 4GB physical memory environment. So, this patch adds a new quirk XHCI_NO_64BIT_SUPPORT to resolve such an issue. Cc: Signed-off-by: Yoshihiro Shimoda Reviewed-by: Felipe Balbi Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 73554d82eea481660e73673fd6284297c955fa49 Author: Mathias Nyman Date: Fri Apr 8 16:25:06 2016 +0300 xhci: resume USB 3 roothub first [ Upstream commit 671ffdff5b13314b1fc65d62cf7604b873fb5dc4 ] Give USB3 devices a better chance to enumerate at USB 3 speeds if they are connected to a suspended host. Solves an issue with NEC uPD720200 host hanging when partially enumerating a USB3 device as USB2 after host controller runtime resume. 
Cc: Tested-by: Mike Murdoch Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 4b76bb498f80d40a395f32fb1ff1b9788404cee1 Author: Rafal Redzimski Date: Fri Apr 8 16:25:05 2016 +0300 usb: xhci: applying XHCI_PME_STUCK_QUIRK to Intel BXT B0 host [ Upstream commit 0d46faca6f887a849efb07c1655b5a9f7c288b45 ] Broxton B0 also requires XHCI_PME_STUCK_QUIRK. Add the PCI device ID for Broxton B and apply the quirk to it. Cc: Signed-off-by: Rafal Redzimski Signed-off-by: Robert Dobrowolski Signed-off-by: Mathias Nyman Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 73c3959839a1c7a48d09a5af87ba958fc11da9ed Author: Rui Salvaterra Date: Sat Apr 9 22:05:34 2016 +0100 lib: lz4: fixed zram with lz4 on big endian machines [ Upstream commit 3e26a691fe3fe1e02a76e5bab0c143ace4b137b4 ] Based on Sergey's test patch [1], this fixes zram with lz4 compression on big endian cpus. Note that the 64-bit preprocessor test is not a cleanup, it's part of the fix, since those identifiers are bogus (for example, __ppc64__ isn't defined anywhere else in the kernel, which means we'd fall into the 32-bit definitions on ppc64). Tested on ppc64 with no regression on x86_64. [1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4 Cc: stable@vger.kernel.org Suggested-by: Sergey Senozhatsky Signed-off-by: Rui Salvaterra Reviewed-by: Sergey Senozhatsky Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin commit 4da4d18f9896db55cf7217d384b36d5ac78a6646 Author: Andy Shevchenko Date: Fri Apr 8 16:22:17 2016 +0300 dmaengine: dw: fix master selection [ Upstream commit 3fe6409c23e2bee4b2b1b6d671d2da8daa15271c ] The commit 895005202987 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id member of the generic slave configuration. Meanwhile it broke the master selection by removing an important call to dwc_set_masters() in ->device_alloc_chan_resources(), which copied masters from the custom slave configuration to the internal channel structure. Everything works until now since there is no customized connection of the DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use. The configurations where 2 masters are connected to the different masters are not working anymore. We are expecting one user of such a configuration and need to select masters properly. Besides that, it is obviously a performance regression since only one master is in use in a multi-master configuration. Select masters in accordance with what the user asked for. Keep this patch in a form more suitable for backporting. We are safe to take the necessary data in ->device_alloc_chan_resources() because we don't support a generic slave configuration embedded into a custom one, and thus the only way to provide such is to use the parameter to a filter function which is called exactly before channel resource allocation. While here, replace BUG_ON() with a less noisy dev_warn() and prevent channel allocation in case of error. Fixes: 895005202987 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage") Cc: stable@vger.kernel.org Signed-off-by: Andy Shevchenko Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin commit a9d7a4f2dfa79a910cd98922b8f76398fa75d0e3 Author: Kailang Yang Date: Tue Apr 12 10:55:03 2016 +0800 ALSA: usb-audio: Skip volume controls triggers hangup on Dell USB Dock [ Upstream commit adcdd0d5a1cb779f6d455ae70882c19c527627a8 ] This is a workaround for Dell USB dock audio. It fixes the issue of the master volume being kept low.
[Some background: the patch essentially skips the controls of a couple of FU volumes. Although the firmware exposes the dB and the value information via the usb descriptor, changing the values (we set the min volume as default) screws up the device. Although this has been fixed in the newer firmware, the devices are shipped with the old firmware, thus we need the workaround in the driver side. -- tiwai] Signed-off-by: Kailang Yang Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit 9cb8f3d97cc04a226774c1b486f6cec8a39dbe43 Author: Hui Wang Date: Fri Apr 1 11:00:15 2016 +0800 ALSA: hda - fix front mic problem for a HP desktop [ Upstream commit e549d190f7b5f94e9ab36bd965028112914d010d ] The front mic jack (pink color) can't detect any plug or unplug. After applying this fix, both detecting function and recording function work well. BugLink: https://bugs.launchpad.net/bugs/1564712 Cc: stable@vger.kernel.org Signed-off-by: Hui Wang Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit 9081260396898ff62cdbf1a829b290779f246c34 Author: David Matlack Date: Wed Mar 30 12:24:47 2016 -0700 kvm: x86: do not leak guest xcr0 into host interrupt handlers [ Upstream commit fc5b7f3bf1e1414bd4e91db6918c85ace0c873a5 ] An interrupt handler that uses the fpu can kill a KVM VM, if it runs under the following conditions: - the guest's xcr0 register is loaded on the cpu - the guest's fpu context is not loaded - the host is using eagerfpu Note that the guest's xcr0 register and fpu context are not loaded as part of the atomic world switch into "guest mode". They are loaded by KVM while the cpu is still in "host mode". Usage of the fpu in interrupt context is gated by irq_fpu_usable(). The interrupt handler will look something like this: if (irq_fpu_usable()) { kernel_fpu_begin(); [... code that uses the fpu ...] kernel_fpu_end(); } As long as the guest's fpu is not loaded and the host is using eager fpu, irq_fpu_usable() returns true (interrupted_kernel_fpu_idle() returns true). The interrupt handler proceeds to use the fpu with the guest's xcr0 live. kernel_fpu_begin() saves the current fpu context. If this uses XSAVE[OPT], it may leave the xsave area in an undesirable state. According to the SDM, during XSAVE bit i of XSTATE_BV is not modified if bit i is 0 in xcr0. So it's possible that XSTATE_BV[i] == 1 and xcr0[i] == 0 following an XSAVE. kernel_fpu_end() restores the fpu context. Now if any bit i in XSTATE_BV == 1 while xcr0[i] == 0, XRSTOR generates a #GP. The fault is trapped and SIGSEGV is delivered to the current process. Only pre-4.2 kernels appear to be vulnerable to this sequence of events. Commit 653f52c ("kvm,x86: load guest FPU context more eagerly") from 4.2 forces the guest's fpu to always be loaded on eagerfpu hosts. This patch fixes the bug by keeping the host's xcr0 loaded outside of the interrupts-disabled region where KVM switches into guest mode. Cc: stable@vger.kernel.org Suggested-by: Andy Lutomirski Signed-off-by: David Matlack [Move load after goto cancel_injection. - Paolo] Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin commit 9bd5af8457978fa4979f7d1fe4c86d8aef2c651e Author: Helge Deller Date: Fri Apr 8 18:32:52 2016 +0200 parisc: Unbreak handling exceptions from kernel modules [ Upstream commit 2ef4dfd9d9f288943e249b78365a69e3ea3ec072 ] Handling exceptions from modules never worked on parisc. It was just masked by the fact that exceptions from modules don't happen during normal use. 
When a module triggers an exception in get_user() we need to load the main kernel dp value before accessing the exception_data structure, and afterwards restore the original dp value of the module on exit. Noticed-by: Mikulas Patocka Signed-off-by: Helge Deller Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin commit e69ea04639309f34ad17c0e443fc3bea15c7a5fb Author: Christoph Lameter Date: Fri Dec 12 16:58:47 2014 -0800 parisc: percpu: update comments referring to __get_cpu_var [ Upstream commit 6ddb798f0248e3460c2dce76af5cb30a980efccd ] __get_cpu_var was removed. Update comments to refer to this_cpu_ptr() instead. Signed-off-by: Christoph Lameter Cc: "James E.J. Bottomley" Cc: Tejun Heo Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin commit 2680e055d991165d289149bd090a0ddf24bd88cb Author: Helge Deller Date: Fri Apr 8 18:18:48 2016 +0200 parisc: Fix kernel crash with reversed copy_from_user() [ Upstream commit ef72f3110d8b19f4c098a0bff7ed7d11945e70c6 ] The kernel module testcase (lib/test_user_copy.c) exhibited a kernel crash on parisc if the parameters for copy_from_user were reversed ("illegal reversed copy_to_user" testcase). Fix this potential crash by checking the fault handler if the faulting address is in the exception table. Signed-off-by: Helge Deller Cc: stable@vger.kernel.org Cc: Kees Cook Signed-off-by: Sasha Levin commit 7d96d5947bb6a5704e15789140bcd572cdb14d6d Author: Helge Deller Date: Fri Apr 8 18:11:33 2016 +0200 parisc: Avoid function pointers for kernel exception routines [ Upstream commit e3893027a300927049efc1572f852201eb785142 ] We want to avoid the kernel module loader to create function pointers for the kernel fixup routines of get_user() and put_user(). Changing the external reference from function type to int type fixes this. This unbreaks exception handling for get_user() and put_user() when called from a kernel module. Signed-off-by: Helge Deller Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin commit ff603cca6d4fc913ee2766cedbc5dedb73d1a469 Author: Yong Li Date: Wed Mar 30 14:49:14 2016 +0800 gpio: pca953x: Use correct u16 value for register word write [ Upstream commit 9b8e3ec34318663affced3c14d960e78d760dd9a ] The current implementation only uses the first byte in val, the second byte is always 0. Change it to use cpu_to_le16 to write the two bytes into the register Cc: stable@vger.kernel.org Signed-off-by: Yong Li Reviewed-by: Phil Reid Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin commit 917871b96120f307588b19efd7b70632acab42fc Author: Bjørn Mork Date: Thu Apr 7 12:09:17 2016 +0200 USB: option: add "D-Link DWM-221 B1" device id [ Upstream commit d48d5691ebf88a15d95ba96486917ffc79256536 ] Thomas reports: "Windows: 00 diagnostics 01 modem 02 at-port 03 nmea 04 nic Linux: T: Bus=02 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 4 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=2001 ProdID=7e19 Rev=02.32 S: Manufacturer=Mobile Connect S: Product=Mobile Connect S: SerialNumber=0123456789ABCDEF C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA I: If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option I: If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option I: If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option I: If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option I: If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan I: If#= 5 Alt= 0 #EPs= 2 Cls=08(stor.) 
Sub=06 Prot=50 Driver=usb-storage" Reported-by: Thomas Schäfer Cc: Signed-off-by: Bjørn Mork Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit 2af56ded21b571667c7961d079a23ff8d4934288 Author: Martyn Welch Date: Tue Mar 29 17:47:29 2016 +0100 USB: serial: cp210x: Adding GE Healthcare Device ID [ Upstream commit cddc9434e3dcc37a85c4412fb8e277d3a582e456 ] The CP2105 is used in the GE Healthcare Remote Alarm Box, with the Manufacturer ID of 0x1901 and Product ID of 0x0194. Signed-off-by: Martyn Welch Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit 9ea952837935e136af745a3af5802fcdecf60314 Author: Josh Boyer Date: Thu Mar 10 09:48:52 2016 -0500 USB: serial: ftdi_sio: Add support for ICP DAS I-756xU devices [ Upstream commit ea6db90e750328068837bed34cb1302b7a177339 ] A Fedora user reports that the ftdi_sio driver works properly for the ICP DAS I-7561U device. Further, the user manual for these devices instructs users to load the driver and add the ids using the sysfs interface. Add support for these in the driver directly so that the devices work out of the box instead of needing manual configuration. Reported-by: CC: stable Signed-off-by: Josh Boyer Signed-off-by: Johan Hovold Signed-off-by: Sasha Levin commit 048605483fbdd1e77ead32a7cd7b95cc17eaaf0e Author: Filipe Manana Date: Wed Mar 30 23:37:21 2016 +0100 Btrfs: fix file/data loss caused by fsync after rename and new inode [ Upstream commit 56f23fdbb600e6087db7b009775b95ce07cc3195 ] If we rename an inode A (be it a file or a directory), create a new inode B with the old name of inode A and under the same parent directory, fsync inode B and then power fail, at log tree replay time we end up removing inode A completely. If inode A is a directory then all its files are gone too. Example scenarios where this happens: This is reproducible with the following steps, taken from a couple of test cases written for fstests which are going to be submitted upstream soon: # Scenario 1 mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir -p /mnt/a/x echo "hello" > /mnt/a/x/foo echo "world" > /mnt/a/x/bar sync mv /mnt/a/x /mnt/a/y mkdir /mnt/a/x xfs_io -c fsync /mnt/a/x The next time the fs is mounted, log tree replay happens and the directory "y" does not exist nor do the files "foo" and "bar" exist anywhere (neither in "y" nor in "x", nor the root nor anywhere). # Scenario 2 mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir /mnt/a echo "hello" > /mnt/a/foo sync mv /mnt/a/foo /mnt/a/bar echo "world" > /mnt/a/foo xfs_io -c fsync /mnt/a/foo The next time the fs is mounted, log tree replay happens and the file "bar" does not exists anymore. A file with the name "foo" exists and it matches the second file we created. 
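For readers who prefer the syscall-level view, scenario 2 above can be reproduced with a sequence like the following user-space sketch (a hypothetical illustration only, not part of the patch; /mnt is assumed to be a freshly created btrfs mount and the file names are placeholders):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create (or truncate) a file, write a short string into it and return
 * the open file descriptor, or -1 on error. */
static int write_file(const char *path, const char *data)
{
        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);

        if (fd < 0)
                return -1;
        if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
                close(fd);
                return -1;
        }
        return fd;
}

int main(void)
{
        int fd;

        if (mkdir("/mnt/a", 0755) && errno != EEXIST) {
                perror("mkdir");
                return 1;
        }

        /* Create "foo" and make sure it is durably persisted. */
        fd = write_file("/mnt/a/foo", "hello\n");
        if (fd < 0) {
                perror("/mnt/a/foo");
                return 1;
        }
        close(fd);
        sync();

        /* Rename "foo" to "bar", then create a new inode under the old name. */
        if (rename("/mnt/a/foo", "/mnt/a/bar")) {
                perror("rename");
                return 1;
        }
        fd = write_file("/mnt/a/foo", "world\n");
        if (fd < 0) {
                perror("/mnt/a/foo");
                return 1;
        }

        /* fsync only the new inode; if power is lost right after this call,
         * log tree replay used to remove the renamed inode ("bar") completely. */
        if (fsync(fd)) {
                perror("fsync");
                return 1;
        }
        close(fd);
        return 0;
}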
Another related problem that does not involve file/data loss is when a new inode is created with the name of a deleted snapshot and we fsync it: mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir /mnt/testdir btrfs subvolume snapshot /mnt /mnt/testdir/snap btrfs subvolume delete /mnt/testdir/snap rmdir /mnt/testdir mkdir /mnt/testdir xfs_io -c fsync /mnt/testdir # or fsync some file inside /mnt/testdir The next time the fs is mounted the log replay procedure fails because it attempts to delete the snapshot entry (which has dir item key type of BTRFS_ROOT_ITEM_KEY) as if it were a regular (non-root) entry, resulting in the following error that causes mount to fail: [52174.510532] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257 [52174.512570] ------------[ cut here ]------------ [52174.513278] WARNING: CPU: 12 PID: 28024 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x178/0x351 [btrfs]() [52174.514681] BTRFS: Transaction aborted (error -2) [52174.515630] Modules linked in: btrfs dm_flakey dm_mod overlay crc32c_generic ppdev xor raid6_pq acpi_cpufreq parport_pc tpm_tis sg parport tpm evdev i2c_piix4 proc [52174.521568] CPU: 12 PID: 28024 Comm: mount Tainted: G W 4.5.0-rc6-btrfs-next-27+ #1 [52174.522805] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014 [52174.524053] 0000000000000000 ffff8801df2a7710 ffffffff81264e93 ffff8801df2a7758 [52174.524053] 0000000000000009 ffff8801df2a7748 ffffffff81051618 ffffffffa03591cd [52174.524053] 00000000fffffffe ffff88015e6e5000 ffff88016dbc3c88 ffff88016dbc3c88 [52174.524053] Call Trace: [52174.524053] [] dump_stack+0x67/0x90 [52174.524053] [] warn_slowpath_common+0x99/0xb2 [52174.524053] [] ? __btrfs_unlink_inode+0x178/0x351 [btrfs] [52174.524053] [] warn_slowpath_fmt+0x48/0x50 [52174.524053] [] __btrfs_unlink_inode+0x178/0x351 [btrfs] [52174.524053] [] ? iput+0xb0/0x284 [52174.524053] [] btrfs_unlink_inode+0x1c/0x3d [btrfs] [52174.524053] [] check_item_in_log+0x1fe/0x29b [btrfs] [52174.524053] [] replay_dir_deletes+0x167/0x1cf [btrfs] [52174.524053] [] fixup_inode_link_count+0x289/0x2aa [btrfs] [52174.524053] [] fixup_inode_link_counts+0xcb/0x105 [btrfs] [52174.524053] [] btrfs_recover_log_trees+0x258/0x32c [btrfs] [52174.524053] [] ? replay_one_extent+0x511/0x511 [btrfs] [52174.524053] [] open_ctree+0x1dd4/0x21b9 [btrfs] [52174.524053] [] btrfs_mount+0x97e/0xaed [btrfs] [52174.524053] [] ? trace_hardirqs_on+0xd/0xf [52174.524053] [] mount_fs+0x67/0x131 [52174.524053] [] vfs_kern_mount+0x6c/0xde [52174.524053] [] btrfs_mount+0x1ac/0xaed [btrfs] [52174.524053] [] ? trace_hardirqs_on+0xd/0xf [52174.524053] [] ? lockdep_init_map+0xb9/0x1b3 [52174.524053] [] mount_fs+0x67/0x131 [52174.524053] [] vfs_kern_mount+0x6c/0xde [52174.524053] [] do_mount+0x8a6/0x9e8 [52174.524053] [] ? strndup_user+0x3f/0x59 [52174.524053] [] SyS_mount+0x77/0x9f [52174.524053] [] entry_SYSCALL_64_fastpath+0x12/0x6b [52174.561288] ---[ end trace 6b53049efb1a3ea6 ]--- Fix this by forcing a transaction commit when such cases happen. This means we check in the commit root of the subvolume tree if there was any other inode with the same reference when the inode we are fsync'ing is a new inode (created in the current transaction). 
Test cases for fstests, covering all the scenarios given above, were submitted upstream for fstests: * fstests: generic test for fsync after renaming directory https://patchwork.kernel.org/patch/8694281/ * fstests: generic test for fsync after renaming file https://patchwork.kernel.org/patch/8694301/ * fstests: add btrfs test for fsync after snapshot deletion https://patchwork.kernel.org/patch/8670671/ Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Signed-off-by: Chris Mason Signed-off-by: Sasha Levin commit acc54d8fe91c9173cc1d162016100821629a098a Author: Filipe Manana Date: Thu Jun 25 04:17:46 2015 +0100 Btrfs: fix fsync after truncate when no_holes feature is enabled [ Upstream commit a89ca6f24ffe435edad57de02eaabd37a2c6bff6 ] When we have the no_holes feature enabled, if a we truncate a file to a smaller size, truncate it again but to a size greater than or equals to its original size and fsync it, the log tree will not have any information about the hole covering the range [truncate_1_offset, new_file_size[. Which means if the fsync log is replayed, the file will remain with the state it had before both truncate operations. Without the no_holes feature this does not happen, since when the inode is logged (full sync flag is set) it will find in the fs/subvol tree a leaf with a generation matching the current transaction id that has an explicit extent item representing the hole. Fix this by adding an explicit extent item representing a hole between the last extent and the inode's i_size if we are doing a full sync. The issue is easy to reproduce with the following test case for fstests: . ./common/rc . ./common/filter . ./common/dmflakey _need_to_be_root _supported_fs generic _supported_os Linux _require_scratch _require_dm_flakey # This test was motivated by an issue found in btrfs when the btrfs # no-holes feature is enabled (introduced in kernel 3.14). So enable # the feature if the fs being tested is btrfs. if [ $FSTYP == "btrfs" ]; then _require_btrfs_fs_feature "no_holes" _require_btrfs_mkfs_feature "no-holes" MKFS_OPTIONS="$MKFS_OPTIONS -O no-holes" fi rm -f $seqres.full _scratch_mkfs >>$seqres.full 2>&1 _init_flakey _mount_flakey # Create our test files and make sure everything is durably persisted. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K" \ -c "pwrite -S 0xbb 64K 61K" \ $SCRATCH_MNT/foo | _filter_xfs_io $XFS_IO_PROG -f -c "pwrite -S 0xee 0 64K" \ -c "pwrite -S 0xff 64K 61K" \ $SCRATCH_MNT/bar | _filter_xfs_io sync # Now truncate our file foo to a smaller size (64Kb) and then truncate # it to the size it had before the shrinking truncate (125Kb). Then # fsync our file. If a power failure happens after the fsync, we expect # our file to have a size of 125Kb, with the first 64Kb of data having # the value 0xaa and the second 61Kb of data having the value 0x00. $XFS_IO_PROG -c "truncate 64K" \ -c "truncate 125K" \ -c "fsync" \ $SCRATCH_MNT/foo # Do something similar to our file bar, but the first truncation sets # the file size to 0 and the second truncation expands the size to the # double of what it was initially. $XFS_IO_PROG -c "truncate 0" \ -c "truncate 253K" \ -c "fsync" \ $SCRATCH_MNT/bar _load_flakey_table $FLAKEY_DROP_WRITES _unmount_flakey # Allow writes again, mount to trigger log replay and validate file # contents. _load_flakey_table $FLAKEY_ALLOW_WRITES _mount_flakey # We expect foo to have a size of 125Kb, the first 64Kb of data all # having the value 0xaa and the remaining 61Kb to be a hole (all bytes # with value 0x00). 
echo "File foo content after log replay:" od -t x1 $SCRATCH_MNT/foo # We expect bar to have a size of 253Kb and no extents (any byte read # from bar has the value 0x00). echo "File bar content after log replay:" od -t x1 $SCRATCH_MNT/bar status=0 exit The expected file contents in the golden output are: File foo content after log replay: 0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa * 0200000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0372000 File bar content after log replay: 0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0772000 Without this fix, their contents are: File foo content after log replay: 0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa * 0200000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb * 0372000 File bar content after log replay: 0000000 ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee * 0200000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff * 0372000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0772000 A test case submission for fstests follows soon. Signed-off-by: Filipe Manana Reviewed-by: Liu Bo Signed-off-by: Chris Mason Signed-off-by: Sasha Levin commit 31a0fa94fcf9e4e9ab280488dfb483d434f80580 Author: Filipe Manana Date: Sat Jun 20 00:44:51 2015 +0100 Btrfs: fix fsync xattr loss in the fast fsync path [ Upstream commit 36283bf777d963fac099213297e155d071096994 ] After commit 4f764e515361 ("Btrfs: remove deleted xattrs on fsync log replay"), we can end up in a situation where during log replay we end up deleting xattrs that were never deleted when their file was last fsynced. This happens in the fast fsync path (flag BTRFS_INODE_NEEDS_FULL_SYNC is not set in the inode) if the inode has the flag BTRFS_INODE_COPY_EVERYTHING set, the xattr was added in a past transaction and the leaf where the xattr is located was not updated (COWed or created) in the current transaction. In this scenario the xattr item never ends up in the log tree and therefore at log replay time, which makes the replay code delete the xattr from the fs/subvol tree as it thinks that xattr was deleted prior to the last fsync. Fix this by always logging all xattrs, which is the simplest and most reliable way to detect deleted xattrs and replay the deletes at log replay time. This issue is reproducible with the following test case for fstests: seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" here=`pwd` tmp=/tmp/$$ status=1 # failure is the default! _cleanup() { _cleanup_flakey rm -f $tmp.* } trap "_cleanup; exit \$status" 0 1 2 3 15 # get standard environment, filters and checks . ./common/rc . ./common/filter . ./common/dmflakey . ./common/attr # real QA test starts here # We create a lot of xattrs for a single file. Only btrfs and xfs are currently # able to store such a large mount of xattrs per file, other filesystems such # as ext3/4 and f2fs for example, fail with ENOSPC even if we attempt to add # less than 1000 xattrs with very small values. _supported_fs btrfs xfs _supported_os Linux _need_to_be_root _require_scratch _require_dm_flakey _require_attrs _require_metadata_journaling $SCRATCH_DEV rm -f $seqres.full _scratch_mkfs >> $seqres.full 2>&1 _init_flakey _mount_flakey # Create the test file with some initial data and make sure everything is # durably persisted. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" $SCRATCH_MNT/foo | _filter_xfs_io sync # Add many small xattrs to our file. 
# We create such a large amount because it's needed to trigger the issue found # in btrfs - we need to have an amount that causes the fs to have at least 3 # btree leaves with xattrs stored in them, and it must work on any leaf size # (maximum leaf/node size is 64Kb). num_xattrs=2000 for ((i = 1; i <= $num_xattrs; i++)); do name="user.attr_$(printf "%04d" $i)" $SETFATTR_PROG -n $name -v "val_$(printf "%04d" $i)" $SCRATCH_MNT/foo done # Sync the filesystem to force a commit of the current btrfs transaction, this # is a necessary condition to trigger the bug on btrfs. sync # Now update our file's data and fsync the file. # After a successful fsync, if the fsync log/journal is replayed we expect to # see all the xattrs we added before with the same values (and the updated file # data of course). Btrfs used to delete some of these xattrs when it replayed # its fsync log/journal. $XFS_IO_PROG -c "pwrite -S 0xbb 8K 16K" \ -c "fsync" \ $SCRATCH_MNT/foo | _filter_xfs_io # Simulate a crash/power loss. _load_flakey_table $FLAKEY_DROP_WRITES _unmount_flakey # Allow writes again and mount. This makes the fs replay its fsync log. _load_flakey_table $FLAKEY_ALLOW_WRITES _mount_flakey echo "File content after crash and log replay:" od -t x1 $SCRATCH_MNT/foo echo "File xattrs after crash and log replay:" for ((i = 1; i <= $num_xattrs; i++)); do name="user.attr_$(printf "%04d" $i)" echo -n "$name=" $GETFATTR_PROG --absolute-names -n $name --only-values $SCRATCH_MNT/foo echo done status=0 exit The golden output expects all xattrs to be available, and with the correct values, after the fsync log is replayed. Signed-off-by: Filipe Manana Signed-off-by: Chris Mason Signed-off-by: Sasha Levin commit 90476a448c6f7dfd571a8a8ee6c1973f64b8c04c Author: Filipe Manana Date: Wed Jun 17 12:49:23 2015 +0100 Btrfs: fix fsync data loss after append write [ Upstream commit e4545de5b035c7debb73d260c78377dbb69cbfb5 ] If we do an append write to a file (which increases its inode's i_size) that does not have the flag BTRFS_INODE_NEEDS_FULL_SYNC set in its inode, and the previous transaction added a new hard link to the file, which sets the flag BTRFS_INODE_COPY_EVERYTHING in the file's inode, and then fsync the file, the inode's new i_size isn't logged. This has the consequence that after the fsync log is replayed, the file size remains what it was before the append write operation, which means users/applications will not be able to read the data that was successfully fsync'ed before. This happens because neither the inode item nor the delayed inode get their i_size updated when the append write is made - doing so would require starting a transaction in the buffered write path, something that we do not do intentionally for performance reasons. Fix this by making sure that when the flag BTRFS_INODE_COPY_EVERYTHING is set the inode is logged with its current i_size (log the in-memory inode into the log tree). This issue is not a recent regression and is easy to reproduce with the following test case for fstests: seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" here=`pwd` tmp=/tmp/$$ status=1 # failure is the default! _cleanup() { _cleanup_flakey rm -f $tmp.* } trap "_cleanup; exit \$status" 0 1 2 3 15 # get standard environment, filters and checks . ./common/rc . ./common/filter .
./common/dmflakey # real QA test starts here _supported_fs generic _supported_os Linux _need_to_be_root _require_scratch _require_dm_flakey _require_metadata_journaling $SCRATCH_DEV _crash_and_mount() { # Simulate a crash/power loss. _load_flakey_table $FLAKEY_DROP_WRITES _unmount_flakey # Allow writes again and mount. This makes the fs replay its fsync log. _load_flakey_table $FLAKEY_ALLOW_WRITES _mount_flakey } rm -f $seqres.full _scratch_mkfs >> $seqres.full 2>&1 _init_flakey _mount_flakey # Create the test file with some initial data and then fsync it. # The fsync here is only needed to trigger the issue in btrfs, as it causes the # the flag BTRFS_INODE_NEEDS_FULL_SYNC to be removed from the btrfs inode. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" \ -c "fsync" \ $SCRATCH_MNT/foo | _filter_xfs_io sync # Add a hard link to our file. # On btrfs this sets the flag BTRFS_INODE_COPY_EVERYTHING on the btrfs inode, # which is a necessary condition to trigger the issue. ln $SCRATCH_MNT/foo $SCRATCH_MNT/bar # Sync the filesystem to force a commit of the current btrfs transaction, this # is a necessary condition to trigger the bug on btrfs. sync # Now append more data to our file, increasing its size, and fsync the file. # In btrfs because the inode flag BTRFS_INODE_COPY_EVERYTHING was set and the # write path did not update the inode item in the btree nor the delayed inode # item (in memory struture) in the current transaction (created by the fsync # handler), the fsync did not record the inode's new i_size in the fsync # log/journal. This made the data unavailable after the fsync log/journal is # replayed. $XFS_IO_PROG -c "pwrite -S 0xbb 32K 32K" \ -c "fsync" \ $SCRATCH_MNT/foo | _filter_xfs_io echo "File content after fsync and before crash:" od -t x1 $SCRATCH_MNT/foo _crash_and_mount echo "File content after crash and log replay:" od -t x1 $SCRATCH_MNT/foo status=0 exit The expected file output before and after the crash/power failure expects the appended data to be available, which is: 0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa * 0100000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb * 0200000 Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana Reviewed-by: Liu Bo Signed-off-by: Chris Mason Signed-off-by: Sasha Levin commit 34caf1dc30b288cc94a0d44e7e9a133de8246062 Author: Jerome Marchand Date: Wed Apr 6 14:06:48 2016 +0100 assoc_array: don't call compare_object() on a node [ Upstream commit 8d4a2ec1e0b41b0cf9a0c5cd4511da7f8e4f3de2 ] Changes since V1: fixed the description and added KASan warning. In assoc_array_insert_into_terminal_node(), we call the compare_object() method on all non-empty slots, even when they're not leaves, passing a pointer to an unexpected structure to compare_object(). Currently it causes an out-of-bound read access in keyring_compare_object detected by KASan (see below). The issue is easily reproduced with keyutils testsuite. Only call compare_object() when the slot is a leave. 
KASan warning: ================================================================== BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838 Read of size 8 by task keyctl/1655 ============================================================================= BUG kmalloc-192 (Not tainted): kasan: bad access detected ----------------------------------------------------------------------------- Disabling lock debugging due to kernel taint INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647 ___slab_alloc+0x563/0x5c0 __slab_alloc+0x51/0x90 kmem_cache_alloc_trace+0x263/0x300 assoc_array_insert+0xfd0/0x3a60 __key_link_begin+0xfc/0x270 key_create_or_update+0x459/0xaf0 SyS_add_key+0x1ba/0x350 entry_SYSCALL_64_fastpath+0x12/0x76 INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080 INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1 Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00 ...`............ Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff ...`.......`.... Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ CPU: 0 PID: 1655 Comm: keyctl Tainted: G B 4.5.0-rc4-kasan+ #291 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491 ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969 ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650 Call Trace: [] dump_stack+0x85/0xc4 [] print_trailer+0xf9/0x150 [] object_err+0x34/0x40 [] kasan_report_error+0x230/0x550 [] ? keyring_get_key_chunk+0x13e/0x210 [] __asan_report_load_n_noabort+0x5d/0x70 [] ? keyring_compare_object+0x213/0x240 [] keyring_compare_object+0x213/0x240 [] assoc_array_insert+0x86c/0x3a60 [] ? assoc_array_cancel_edit+0x70/0x70 [] ? __key_link_begin+0x20d/0x270 [] __key_link_begin+0xfc/0x270 [] key_create_or_update+0x459/0xaf0 [] ? trace_hardirqs_on+0xd/0x10 [] ? key_type_lookup+0xc0/0xc0 [] ? lookup_user_key+0x13d/0xcd0 [] ? memdup_user+0x53/0x80 [] SyS_add_key+0x1ba/0x350 [] ? key_get_type_from_user.constprop.6+0xa0/0xa0 [] ? retint_user+0x18/0x23 [] ? trace_hardirqs_on_caller+0x3fe/0x580 [] ? 
trace_hardirqs_on_thunk+0x17/0x19 [] entry_SYSCALL_64_fastpath+0x12/0x76 Memory state around the buggy address: ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00 ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc >ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ^ ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00 ================================================================== Signed-off-by: Jerome Marchand Signed-off-by: David Howells cc: stable@vger.kernel.org Signed-off-by: Sasha Levin commit 4894948b58823faa803c247e6ae8409d1fd30575 Author: Dennis Kadioglu Date: Wed Apr 6 08:39:01 2016 +0200 ALSA: usb-audio: Add a quirk for Plantronics BT300 [ Upstream commit b4203ff5464da00b7812e7b480192745b0d66bbf ] Plantronics BT300 does not support reading the sample rate which leads to many lines of "cannot get freq at ep 0x1". This patch adds the USB ID of the BT300 to quirks.c and avoids those error messages. Signed-off-by: Dennis Kadioglu Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit b4774f9c920e558f1e0ba4e0dab9ab00e192494f Author: David Disseldorp Date: Tue Apr 5 11:13:39 2016 +0200 rbd: use GFP_NOIO consistently for request allocations [ Upstream commit 2224d879c7c0f85c14183ef82eb48bd875ceb599 ] As of 5a60e87603c4c533492c515b7f62578189b03c9c, RBD object request allocations are made via rbd_obj_request_create() with GFP_NOIO. However, subsequent OSD request allocations in rbd_osd_req_create*() use GFP_ATOMIC. With heavy page cache usage (e.g. OSDs running on same host as krbd client), rbd_osd_req_create() order-1 GFP_ATOMIC allocations have been observed to fail, where direct reclaim would have allowed GFP_NOIO allocations to succeed. Cc: stable@vger.kernel.org # 3.18+ Suggested-by: Vlastimil Babka Suggested-by: Neil Brown Signed-off-by: David Disseldorp Signed-off-by: Ilya Dryomov Signed-off-by: Sasha Levin commit 7c134b078cb5738f10d1d1533ab1895864b958f9 Author: Paolo Bonzini Date: Thu Mar 31 09:38:51 2016 +0200 compiler-gcc: disable -ftracer for __noclone functions [ Upstream commit 95272c29378ee7dc15f43fa2758cb28a5913a06d ] -ftracer can duplicate asm blocks causing compilation to fail in noclone functions. For example, KVM declares a global variable in an asm like asm("2: ... \n .pushsection data \n .global vmx_return \n vmx_return: .long 2b"); and -ftracer causes a double declaration. Cc: Andrew Morton Cc: Michal Marek Cc: stable@vger.kernel.org Cc: kvm@vger.kernel.org Reported-by: Linda Walsh Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin commit 677fa15cd6d5b0843e7b9c58409f67d656b1ec2f Author: Joe Perches Date: Thu Jun 25 15:01:02 2015 -0700 compiler-gcc: integrate the various compiler-gcc[345].h files [ Upstream commit f320793e52aee78f0fbb8bcaf10e6614d2e67bfc ] [ Upstream commit cb984d101b30eb7478d32df56a0023e4603cba7f ] As gcc major version numbers are going to advance rather rapidly in the future, there's no real value in separate files for each compiler version. Deduplicate some of the macros #defined in each file too. Neaten comments using normal kernel commenting style. 
Signed-off-by: Joe Perches Cc: Andi Kleen Cc: Michal Marek Cc: Segher Boessenkool Cc: Sasha Levin Cc: Anton Blanchard Cc: Alan Modra Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin commit 09cbe7dba48375f1638d04fe85224c15c178a197 Author: Yoshihiro Shimoda Date: Mon Apr 4 20:40:20 2016 +0900 usb: renesas_usbhs: fix to avoid using a disabled ep in usbhsg_queue_done() [ Upstream commit 4fccb0767fdbdb781a9c5b5c15ee7b219443c89d ] This patch fixes an issue that usbhsg_queue_done() may cause kernel panic when dma callback is running and usb_ep_disable() is called by interrupt handler. (Especially, we can reproduce this issue using g_audio with usb-dmac driver.) For example of a flow: usbhsf_dma_complete (on tasklet) --> usbhsf_pkt_handler (on tasklet) --> usbhsg_queue_done (on tasklet) *** interrupt happened and usb_ep_disable() is called *** --> usbhsg_queue_pop (on tasklet) Then, oops happened. Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support") Cc: # v3.1+ Signed-off-by: Yoshihiro Shimoda Signed-off-by: Felipe Balbi Signed-off-by: Sasha Levin commit ccf7f20931bd04cd419d178d4f3a7c059690a924 Author: Yoshihiro Shimoda Date: Mon Mar 16 16:36:48 2015 +0900 usb: renesas_usbhs: fix spinlock suspected in a gadget complete function [ Upstream commit 00f30d29b497577954b20237b405e9d22b5286c2 ] According to the gadget.h, a "complete" function will always be called with interrupts disabled. However, sometimes usbhsg_queue_pop() function is called with interrupts enabled. So, this function should be held by usbhs_lock() to disable interruption. Also, this driver has to call spin_unlock() to avoid spinlock recursion by this driver before calling usb_gadget_giveback_request(). Otherwise, there is possible to cause a spinlock suspected in a gadget complete function. Signed-off-by: Yoshihiro Shimoda Signed-off-by: Felipe Balbi Signed-off-by: Sasha Levin commit 1491937f7897c398121ac284a3bc17203c992a12 Author: Boris Ostrovsky Date: Fri Mar 18 10:11:07 2016 -0400 xen/events: Mask a moving irq [ Upstream commit ff1e22e7a638a0782f54f81a6c9cb139aca2da35 ] Moving an unmasked irq may result in irq handler being invoked on both source and target CPUs. With 2-level this can happen as follows: On source CPU: evtchn_2l_handle_events() -> generic_handle_irq() -> handle_edge_irq() -> eoi_pirq(): irq_move_irq(data); /***** WE ARE HERE *****/ if (VALID_EVTCHN(evtchn)) clear_evtchn(evtchn); If at this moment target processor is handling an unrelated event in evtchn_2l_handle_events()'s loop it may pick up our event since target's cpu_evtchn_mask claims that this event belongs to it *and* the event is unmasked and still pending. At the same time, source CPU will continue executing its own handle_edge_irq(). With FIFO interrupt the scenario is similar: irq_move_irq() may result in a EVTCHNOP_unmask hypercall which, in turn, may make the event pending on the target CPU. We can avoid this situation by moving and clearing the event while keeping event masked. Signed-off-by: Boris Ostrovsky Cc: stable@vger.kernel.org Signed-off-by: David Vrabel Signed-off-by: Sasha Levin commit bdd3700374c2a7281cd54bb24680bdea2372edc4 Author: Takashi Iwai Date: Mon Apr 4 11:47:50 2016 +0200 ALSA: usb-audio: Add a sample rate quirk for Phoenix Audio TMX320 [ Upstream commit f03b24a851d32ca85dacab01785b24a7ee717d37 ] Phoenix Audio TMX320 gives the similar error when the sample rate is asked: usb 2-1.3: 2:1: cannot get freq at ep 0x85 usb 2-1.3: 1:1: cannot get freq at ep 0x2 .... 
Add the corresponding USB-device ID (1de7:0014) to snd_usb_get_sample_rate_quirk() list. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=110221 Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit b672486754838d943a7a2769ed6e55ef169dcd12 Author: Anssi Hannula Date: Sun Dec 13 20:49:59 2015 +0200 ALSA: usb-audio: Add sample rate inquiry quirk for AudioQuest DragonFly [ Upstream commit 12a6116e66695a728bcb9616416c508ce9c051a1 ] Avoid getting sample rate on AudioQuest DragonFly as it is unsupported and causes noisy "cannot get freq at ep 0x1" messages when playback starts. Signed-off-by: Anssi Hannula Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit 186b929c6b1e2db6310858f438b178cc2d00a876 Author: Eric Wong Date: Sat May 30 09:15:39 2015 +0000 ALSA: usb-audio: don't try to get Outlaw RR2150 sample rate [ Upstream commit 2f80b2958abe5658000d5ad9b45a36ecf879666e ] This quirk allows us to avoid the noisy: current rate 0 is different from the runtime rate message every time playback starts. While USB DAC in the RR2150 supports reading the sample rate, it never returns a sample rate other than zero in my observation with common sample rates. Signed-off-by: Eric Wong Cc: Joe Turner Cc: Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin commit 3c853788bdd85e73e9eb5bb8d3f40201a933ad93 Author: Theodore Ts'o Date: Sun Apr 3 17:03:37 2016 -0400 ext4: ignore quota mount options if the quota feature is enabled [ Upstream commit c325a67c72903e1cc30e990a15ce745bda0dbfde ] Previously, ext4 would fail the mount if the file system had the quota feature enabled and quota mount options (used for the older quota setups) were present. This broke xfstests, since xfs silently ignores the usrquote and grpquota mount options if they are specified. This commit changes things so that we are consistent with xfs; having the mount options specified is harmless, so no sense break users by forbidding them. Cc: stable@vger.kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Sasha Levin commit 1cbab78d2f11ad12efc8b61e2ae66f020f7b8ce1 Author: Yuki Shibuya Date: Thu Mar 24 05:17:03 2016 +0000 KVM: x86: Inject pending interrupt even if pending nmi exist [ Upstream commit 321c5658c5e9192dea0d58ab67cf1791e45b2b26 ] Non maskable interrupts (NMI) are preferred to interrupts in current implementation. If a NMI is pending and NMI is blocked by the result of nmi_allowed(), pending interrupt is not injected and enable_irq_window() is not executed, even if interrupts injection is allowed. In old kernel (e.g. 2.6.32), schedule() is often called in NMI context. In this case, interrupts are needed to execute iret that intends end of NMI. The flag of blocking new NMI is not cleared until the guest execute the iret, and interrupts are blocked by pending NMI. Due to this, iret can't be invoked in the guest, and the guest is starved until block is cleared by some events (e.g. canceling injection). This patch injects pending interrupts, when it's allowed, even if NMI is blocked. And, If an interrupts is pending after executing inject_pending_event(), enable_irq_window() is executed regardless of NMI pending counter. 
commit 3c853788bdd85e73e9eb5bb8d3f40201a933ad93
Author: Theodore Ts'o
Date: Sun Apr 3 17:03:37 2016 -0400

ext4: ignore quota mount options if the quota feature is enabled

[ Upstream commit c325a67c72903e1cc30e990a15ce745bda0dbfde ]

Previously, ext4 would fail the mount if the file system had the quota
feature enabled and quota mount options (used for the older quota
setups) were present. This broke xfstests, since xfs silently ignores
the usrquota and grpquota mount options if they are specified. This
commit changes things so that we are consistent with xfs; having the
mount options specified is harmless, so there is no sense in breaking
users by forbidding them.

Cc: stable@vger.kernel.org
Signed-off-by: Theodore Ts'o
Signed-off-by: Sasha Levin

commit 1cbab78d2f11ad12efc8b61e2ae66f020f7b8ce1
Author: Yuki Shibuya
Date: Thu Mar 24 05:17:03 2016 +0000

KVM: x86: Inject pending interrupt even if pending nmi exist

[ Upstream commit 321c5658c5e9192dea0d58ab67cf1791e45b2b26 ]

Non-maskable interrupts (NMIs) are preferred to interrupts in the
current implementation. If an NMI is pending and NMI injection is
blocked according to nmi_allowed(), a pending interrupt is not injected
and enable_irq_window() is not executed, even if interrupt injection is
allowed.

In old kernels (e.g. 2.6.32), schedule() is often called in NMI
context. In this case, interrupts are needed to execute the iret that
marks the end of the NMI. The flag blocking new NMIs is not cleared
until the guest executes the iret, and interrupts are blocked by the
pending NMI. Because of this, the iret can't be invoked in the guest,
and the guest is starved until the block is cleared by some event
(e.g. cancelling injection).

This patch injects pending interrupts, when allowed, even if an NMI is
blocked. And, if an interrupt is pending after executing
inject_pending_event(), enable_irq_window() is executed regardless of
the NMI pending counter.

Cc: stable@vger.kernel.org
Signed-off-by: Yuki Shibuya
Suggested-by: Paolo Bonzini
Signed-off-by: Paolo Bonzini
Signed-off-by: Sasha Levin

commit 4fcac08014394972bfb9b63e10e841770ad34b8e
Author: Theodore Ts'o
Date: Fri Apr 1 01:31:28 2016 -0400

ext4: add lockdep annotations for i_data_sem

[ Upstream commit daf647d2dd58cec59570d7698a45b98e580f2076 ]

With the internal quota feature, mke2fs creates empty quota inodes and
quota usage tracking is enabled as soon as the file system is mounted.
Since quotacheck is no longer preallocating all of the blocks in the
quota inode that are likely to be written to, we are now seeing a
lockdep false positive caused by needing to allocate a quota block from
inside ext4_map_blocks(), while holding i_data_sem for a data inode.
This results in this complaint:

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(&ei->i_data_sem);
                                  lock(&s->s_dquot.dqio_mutex);
                                  lock(&ei->i_data_sem);
     lock(&s->s_dquot.dqio_mutex);

Google-Bug-Id: 27907753

Signed-off-by: Theodore Ts'o
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin
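The usual way to silence this class of false positive is to put the quota
inode's i_data_sem into its own lockdep subclass, so taking it while
another inode's i_data_sem is held no longer looks like a self-deadlock.
A minimal sketch of the lockdep-subclass technique, assuming an
illustrative enum and helper rather than the exact ext4 change:

    #include <linux/rwsem.h>

    /* illustrative subclasses: ordinary data inodes vs. quota inodes */
    enum {
            I_DATA_SEM_EXAMPLE_NORMAL = 0,
            I_DATA_SEM_EXAMPLE_QUOTA,
    };

    static void example_map_quota_inode_blocks(struct rw_semaphore *i_data_sem)
    {
            /*
             * The quota inode's i_data_sem may be taken while another
             * inode's i_data_sem is already held; annotating it with a
             * separate subclass tells lockdep this nesting is intended.
             */
            down_read_nested(i_data_sem, I_DATA_SEM_EXAMPLE_QUOTA);

            /* ... look up or allocate quota blocks here ... */

            up_read(i_data_sem);
    }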
commit 9838542a2b2ea96f5a54e2b31472b7a5b2628050
Author: Martin K. Petersen
Date: Mon Mar 28 21:18:56 2016 -0400

sd: Fix excessive capacity printing on devices with blocks bigger than 512 bytes

[ Upstream commit f08bb1e0dbdd0297258d0b8cd4dbfcc057e57b2a ]

During revalidate we check whether the device capacity has changed
before we decide whether to output disk information or not. The check
for the old capacity failed to take into account that we scaled
sdkp->capacity based on the reported logical block size. Therefore the
capacity test would always fail for devices with sectors bigger than
512 bytes and we would print several copies of the same discovery
information.

Avoid scaling sdkp->capacity and instead adjust the value on the fly
when setting the block device capacity and generating fake C/H/S
geometry.

Signed-off-by: Martin K. Petersen
Cc:
Reported-by: Hannes Reinecke
Reviewed-by: Hannes Reinecke
Reviewed-by: Ewan Milne
Signed-off-by: Martin K. Petersen
Signed-off-by: Sasha Levin

commit fb6e2ebb91f21839aa13c40a6f71ac6423e4c64e
Author: Oliver Neukum
Date: Thu Mar 31 12:04:26 2016 -0400

USB: digi_acceleport: do sanity checking for the number of ports

[ Upstream commit 5a07975ad0a36708c6b0a5b9fea1ff811d0b0c1f ]

The driver can be crashed with devices that expose crafted descriptors
with too few endpoints.

See: http://seclists.org/bugtraq/2016/Mar/61

Signed-off-by: Oliver Neukum
[johan: fix OOB endpoint check and add error messages ]
Cc: stable
Signed-off-by: Johan Hovold
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin

commit 55e18b81b1d3755288aa6234d9439bdd95b3f58f
Author: Oliver Neukum
Date: Thu Mar 31 12:04:25 2016 -0400

USB: cypress_m8: add endpoint sanity check

[ Upstream commit c55aee1bf0e6b6feec8b2927b43f7a09a6d5f754 ]

An attack using missing endpoints exists.

CVE-2016-3137

Signed-off-by: Oliver Neukum
CC: stable@vger.kernel.org
Signed-off-by: Johan Hovold
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin

commit e8f4639414972d17224cd816be7b89a00840b09e
Author: Oliver Neukum
Date: Thu Mar 31 12:04:24 2016 -0400

USB: mct_u232: add sanity checking in probe

[ Upstream commit 4e9a0b05257f29cf4b75f3209243ed71614d062e ]

An attack using the lack of sanity checking in probe is known. This
patch checks for the existence of a second port.

CVE-2016-3136

Signed-off-by: Oliver Neukum
CC: stable@vger.kernel.org
[johan: add error message ]
Signed-off-by: Johan Hovold
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
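The three USB-serial fixes above share one shape: before the driver
dereferences endpoint or port data that a legitimate device would always
provide, the probe/attach path verifies that the (possibly crafted)
descriptors actually contain those endpoints and refuses to bind
otherwise. A hedged, generic sketch of such a check for a usb-serial
driver; the field names come from struct usb_serial, but the function is
illustrative and the exact checks differ per driver:

    #include <linux/errno.h>
    #include <linux/usb.h>
    #include <linux/usb/serial.h>

    /* illustrative attach-time sanity check, not any specific driver's code */
    static int example_attach(struct usb_serial *serial)
    {
            /*
             * A crafted device may omit endpoints the real hardware always
             * has; bail out here rather than dereference missing bulk or
             * interrupt endpoint data later on.
             */
            if (serial->num_bulk_in < 1 || serial->num_bulk_out < 1 ||
                serial->num_interrupt_in < 1) {
                    dev_err(&serial->interface->dev, "missing endpoints\n");
                    return -ENODEV;
            }

            return 0;
    }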
commit ee63f2b1ef65d29a206a1bcfdd6573bacf3932d1
Author: John Keeping
Date: Wed Nov 18 11:17:25 2015 +0000

drm/qxl: fix cursor position with non-zero hotspot

[ Upstream commit d59a1f71ff1aeda4b4630df92d3ad4e3b1dfc885 ]

The SPICE protocol considers the position of a cursor to be the
location of its active pixel on the display, so the cursor is drawn
with its top-left corner at "(x - hot_spot_x, y - hot_spot_y)", but the
DRM cursor position gives the location where the top-left corner should
be drawn, with the hotspot being a hint for drivers that need it.

This fixes the location of the window resize cursors when using Fluxbox
with the QXL DRM driver and both the QXL and modesetting X drivers.

Signed-off-by: John Keeping
Reviewed-by: Daniel Vetter
Cc: stable@vger.kernel.org
Link: http://patchwork.freedesktop.org/patch/msgid/1447845445-2116-1-git-send-email-john@metanate.com
Signed-off-by: Jani Nikula
Signed-off-by: Sasha Levin

commit fd06e77766cf86f28b37e9eaf784c25111b45f7c
Author: Yoshihiro Shimoda
Date: Thu Mar 10 11:30:15 2016 +0900

usb: renesas_usbhs: disable TX IRQ before starting TX DMAC transfer

[ Upstream commit 6490865c67825277b29638e839850882600b48ec ]

This patch adds code to make sure the TX IRQ of the pipe is disabled
before starting a TX DMAC transfer. Otherwise, a lot of unnecessary TX
IRQs may happen in rare cases when the DMAC is used.

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: # v3.1+
Signed-off-by: Yoshihiro Shimoda
Signed-off-by: Felipe Balbi
Signed-off-by: Sasha Levin

commit 237200d1c1ec9453d491da82a7c4819a38f51344
Author: Yoshihiro Shimoda
Date: Thu Mar 10 11:30:14 2016 +0900

usb: renesas_usbhs: avoid NULL pointer dereference in usbhsf_pkt_handler()

[ Upstream commit 894f2fc44f2f3f48c36c973b1123f6ab298be160 ]

When an unexpected situation happens (e.g. a tx/rx irq occurs while the
DMAC is in use), usbhsf_pkt_handler() could cause a NULL pointer
dereference like the following:

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    pgd = c0004000
    [00000000] *pgd=00000000
    Internal error: Oops: 80000007 [#1] SMP ARM
    Modules linked in: usb_f_acm u_serial g_serial libcomposite
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.5.0-rc6-00842-gac57066-dirty #63
    Hardware name: Generic R8A7790 (Flattened Device Tree)
    task: c0729c00 ti: c0724000 task.ti: c0724000
    PC is at 0x0
    LR is at usbhsf_pkt_handler+0xac/0x118
    pc : [<00000000>]  lr : []  psr: 60000193
    sp : c0725db8  ip : 00000000  fp : c0725df4
    r10: 00000001  r9 : 00000193  r8 : ef3ccab4
    r7 : ef3cca10  r6 : eea4586c  r5 : 00000000  r4 : ef19ceb4
    r3 : 00000000  r2 : 0000009c  r1 : c0725dc4  r0 : ef19ceb4

This patch adds a condition to avoid the dereference.

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: # v3.1+
Signed-off-by: Yoshihiro Shimoda
Signed-off-by: Felipe Balbi
Signed-off-by: Sasha Levin

commit 2898d8f1c8623203ddb54fc14e215fe62f8f84aa
Author: Lokesh Vutla
Date: Sat Mar 26 23:08:55 2016 -0600

ARM: OMAP2+: hwmod: Fix updating of sysconfig register

[ Upstream commit 3ca4a238106dedc285193ee47f494a6584b6fd2f ]

Commit 127500ccb766f ("ARM: OMAP2+: Only write the sysconfig on idle
when necessary") describes verifying the cached sysconfig value before
updating it, but only in the idle path. The patch, however, added the
verification in the enable path.

So add the check in the proper place, as per the commit description.
The check is not kept in the enable path because there is a chance of
losing context there; it is safe to do on idle, as the context of the
register will never be lost while the device is active.

Signed-off-by: Lokesh Vutla
Acked-by: Tero Kristo
Cc: Jon Hunter
Cc: # 3.12+
Fixes: commit 127500ccb766 "ARM: OMAP2+: Only write the sysconfig on idle when necessary"
[paul@pwsan.com: appears to have been caused by my own mismerge of the originally posted patch]
Signed-off-by: Paul Walmsley
Signed-off-by: Sasha Levin

commit a57a632774a1918bf3b1e4233cfac5c5ac22af81
Author: Alan Stern
Date: Wed Mar 23 12:17:09 2016 -0400

HID: usbhid: fix inconsistent reset/resume/reset-resume behavior

[ Upstream commit 972e6a993f278b416a8ee3ec65475724fc36feb2 ]

The usbhid driver has inconsistently duplicated code in its post-reset,
resume, and reset-resume pathways:

    reset-resume doesn't check HID_STARTED before trying to restart the
    I/O queues.

    resume fails to clear the HID_SUSPENDED flag if HID_STARTED isn't
    set.

    resume calls usbhid_restart_queues() with usbhid->lock held and the
    others call it without holding the lock.

The first item in particular causes a problem following a reset-resume
if the driver hasn't started up its I/O. URB submission fails because
usbhid->urbin is NULL, and this triggers an unending reset-retry loop.

This patch fixes the problem by creating a new subroutine,
hid_restart_io(), to carry out all the common activities. It also adds
some checks that were missing in the original code:

    After a reset, there's no need to clear any halted endpoints.

    After a resume, if a reset is pending there's no need to restart any
    I/O until the reset is finished.

    After a resume, if the interrupt-IN endpoint is halted there's no
    need to submit the input URB until the halt has been cleared.

Signed-off-by: Alan Stern
Reported-by: Daniel Fraga
Tested-by: Daniel Fraga
CC:
Signed-off-by: Jiri Kosina
Signed-off-by: Sasha Levin
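A hedged sketch of what a consolidated restart helper along those lines
can look like. The flag roles mirror the checks described above, but the
example_* names, flags, and stub helpers are purely illustrative and are
not the actual usbhid patch:

    #include <linux/bitops.h>
    #include <linux/spinlock.h>

    /* illustrative I/O state flags, modelled on the behaviour described above */
    #define EX_STARTED        0   /* I/O queues have been started */
    #define EX_SUSPENDED      1   /* device is suspended */
    #define EX_RESET_PENDING  2   /* a reset has been scheduled */
    #define EX_IN_HALTED      3   /* interrupt-IN endpoint is halted */

    struct example_hid {
            spinlock_t lock;
            unsigned long iofl;
    };

    /* hypothetical helpers standing in for URB submission / halt clearing */
    static int example_submit_input_urb(struct example_hid *hid) { return 0; }
    static void example_schedule_clear_halt(struct example_hid *hid) { }

    /* one common restart path shared by post-reset, resume and reset-resume */
    static int example_restart_io(struct example_hid *hid)
    {
            int ret = 0;

            spin_lock_irq(&hid->lock);
            clear_bit(EX_SUSPENDED, &hid->iofl);

            if (test_bit(EX_RESET_PENDING, &hid->iofl)) {
                    /* a reset is still pending: restart I/O once it is done */
            } else if (test_bit(EX_IN_HALTED, &hid->iofl)) {
                    /* don't submit the input URB until the halt is cleared */
                    example_schedule_clear_halt(hid);
            } else if (test_bit(EX_STARTED, &hid->iofl)) {
                    /* only restart I/O that was actually started */
                    ret = example_submit_input_urb(hid);
            }
            spin_unlock_irq(&hid->lock);

            return ret;
    }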